2025-09-08 00:00:07.360825 | Job console starting
2025-09-08 00:00:07.375513 | Updating git repos
2025-09-08 00:00:07.749063 | Cloning repos into workspace
2025-09-08 00:00:07.889290 | Restoring repo states
2025-09-08 00:00:07.907168 | Merging changes
2025-09-08 00:00:07.907183 | Checking out repos
2025-09-08 00:00:08.306542 | Preparing playbooks
2025-09-08 00:00:08.964046 | Running Ansible setup
2025-09-08 00:00:14.635878 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-08 00:00:16.545218 |
2025-09-08 00:00:16.545325 | PLAY [Base pre]
2025-09-08 00:00:16.568789 |
2025-09-08 00:00:16.568891 | TASK [Setup log path fact]
2025-09-08 00:00:16.596785 | orchestrator | ok
2025-09-08 00:00:16.621212 |
2025-09-08 00:00:16.621328 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-08 00:00:16.747363 | orchestrator | ok
2025-09-08 00:00:16.790745 |
2025-09-08 00:00:16.791149 | TASK [emit-job-header : Print job information]
2025-09-08 00:00:16.860920 | # Job Information
2025-09-08 00:00:16.861098 | Ansible Version: 2.16.14
2025-09-08 00:00:16.861132 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2025-09-08 00:00:16.861161 | Pipeline: periodic-midnight
2025-09-08 00:00:16.861181 | Executor: 521e9411259a
2025-09-08 00:00:16.861198 | Triggered by: https://github.com/osism/testbed
2025-09-08 00:00:16.861216 | Event ID: b884a41641f4497aaf5d5aa1027ab29b
2025-09-08 00:00:16.882194 |
2025-09-08 00:00:16.882298 | LOOP [emit-job-header : Print node information]
2025-09-08 00:00:17.339658 | orchestrator | ok:
2025-09-08 00:00:17.339794 | orchestrator | # Node Information
2025-09-08 00:00:17.339820 | orchestrator | Inventory Hostname: orchestrator
2025-09-08 00:00:17.339840 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-08 00:00:17.339858 | orchestrator | Username: zuul-testbed02
2025-09-08 00:00:17.339875 | orchestrator | Distro: Debian 12.12
2025-09-08 00:00:17.339893 | orchestrator | Provider: static-testbed
2025-09-08 00:00:17.339911 | orchestrator | Region:
2025-09-08 00:00:17.339928 | orchestrator | Label: testbed-orchestrator
2025-09-08 00:00:17.339944 | orchestrator | Product Name: OpenStack Nova
2025-09-08 00:00:17.339960 | orchestrator | Interface IP: 81.163.193.140
2025-09-08 00:00:17.350466 |
2025-09-08 00:00:17.350557 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-08 00:00:18.589377 | orchestrator -> localhost | changed
2025-09-08 00:00:18.596707 |
2025-09-08 00:00:18.596804 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-08 00:00:20.845108 | orchestrator -> localhost | changed
2025-09-08 00:00:20.857908 |
2025-09-08 00:00:20.858006 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-08 00:00:21.775194 | orchestrator -> localhost | ok
2025-09-08 00:00:21.781039 |
2025-09-08 00:00:21.781129 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-08 00:00:21.828405 | orchestrator | ok
2025-09-08 00:00:21.855767 | orchestrator | included: /var/lib/zuul/builds/47f168dc1bd94c728bdb6d46c2dda984/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-08 00:00:21.871659 |
2025-09-08 00:00:21.871753 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-08 00:00:24.241340 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-08 00:00:24.241516 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/47f168dc1bd94c728bdb6d46c2dda984/work/47f168dc1bd94c728bdb6d46c2dda984_id_rsa
2025-09-08 00:00:24.241547 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/47f168dc1bd94c728bdb6d46c2dda984/work/47f168dc1bd94c728bdb6d46c2dda984_id_rsa.pub
2025-09-08 00:00:24.241568 | orchestrator -> localhost | The key fingerprint is:
2025-09-08 00:00:24.241589 | orchestrator -> localhost | SHA256:sgd7cumd5KwDv0HYTJ6ncdLGo+bfR4bbWNN4uWYlQ7c zuul-build-sshkey
2025-09-08 00:00:24.241607 | orchestrator -> localhost | The key's randomart image is:
2025-09-08 00:00:24.241652 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-08 00:00:24.241671 | orchestrator -> localhost | | |
2025-09-08 00:00:24.241689 | orchestrator -> localhost | | |
2025-09-08 00:00:24.241705 | orchestrator -> localhost | | . |
2025-09-08 00:00:24.241721 | orchestrator -> localhost | | * + . .|
2025-09-08 00:00:24.241737 | orchestrator -> localhost | | + S B o +o|
2025-09-08 00:00:24.241760 | orchestrator -> localhost | | .* X .. OE+|
2025-09-08 00:00:24.241778 | orchestrator -> localhost | | +oX . B =o|
2025-09-08 00:00:24.241795 | orchestrator -> localhost | | Oo* oo o+ |
2025-09-08 00:00:24.241812 | orchestrator -> localhost | | ==* ..o |
2025-09-08 00:00:24.241829 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-08 00:00:24.241872 | orchestrator -> localhost | ok: Runtime: 0:00:01.422905
2025-09-08 00:00:24.248238 |
2025-09-08 00:00:24.248330 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-08 00:00:24.295360 | orchestrator | ok
2025-09-08 00:00:24.310370 | orchestrator | included: /var/lib/zuul/builds/47f168dc1bd94c728bdb6d46c2dda984/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-08 00:00:24.348332 |
2025-09-08 00:00:24.348433 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-08 00:00:24.391180 | orchestrator | skipping: Conditional result was False
2025-09-08 00:00:24.397355 |
2025-09-08 00:00:24.397435 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-08 00:00:25.725884 | orchestrator | changed
2025-09-08 00:00:25.735839 |
2025-09-08 00:00:25.735926 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-08 00:00:26.034672 | orchestrator | ok
2025-09-08 00:00:26.042749 |
2025-09-08 00:00:26.042857 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-08 00:00:28.630799 | orchestrator | ok
2025-09-08 00:00:28.635816 |
2025-09-08 00:00:28.635890 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-08 00:00:29.171252 | orchestrator | ok
2025-09-08 00:00:29.176130 |
2025-09-08 00:00:29.176206 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-08 00:00:29.210227 | orchestrator | skipping: Conditional result was False
2025-09-08 00:00:29.216348 |
2025-09-08 00:00:29.216451 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-08 00:00:29.977476 | orchestrator -> localhost | changed
2025-09-08 00:00:29.995999 |
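The add-build-sshkey tasks above generate a fresh per-build RSA keypair in the build workspace, install it on all nodes, and swap it into the executor's ssh-agent in place of the master key. A minimal sketch of the key-generation step, using the key type, size, and comment visible in the log; `WORKDIR` and `BUILD_UUID` are illustrative stand-ins for the real Zuul work directory and build UUID:

```shell
#!/bin/sh
# Sketch: create a per-build SSH keypair as shown in the log
# (RSA 3072, empty passphrase, comment "zuul-build-sshkey").
# WORKDIR and BUILD_UUID are hypothetical stand-ins.
WORKDIR="$(mktemp -d)"
BUILD_UUID="47f168dc1bd94c728bdb6d46c2dda984"

ssh-keygen -t rsa -b 3072 -N '' \
  -C zuul-build-sshkey \
  -f "${WORKDIR}/${BUILD_UUID}_id_rsa"

# The role then copies both halves to every node and, after removing
# the master key, re-adds the build key to the local agent, roughly:
#   ssh-add "${WORKDIR}/${BUILD_UUID}_id_rsa"
ls "${WORKDIR}/${BUILD_UUID}_id_rsa" "${WORKDIR}/${BUILD_UUID}_id_rsa.pub"
```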
2025-09-08 00:00:29.996092 | TASK [add-build-sshkey : Add back temp key]
2025-09-08 00:00:30.568036 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/47f168dc1bd94c728bdb6d46c2dda984/work/47f168dc1bd94c728bdb6d46c2dda984_id_rsa (zuul-build-sshkey)
2025-09-08 00:00:30.568203 | orchestrator -> localhost | ok: Runtime: 0:00:00.036907
2025-09-08 00:00:30.573847 |
2025-09-08 00:00:30.573924 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-08 00:00:31.076320 | orchestrator | ok
2025-09-08 00:00:31.081120 |
2025-09-08 00:00:31.081199 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-08 00:00:31.123999 | orchestrator | skipping: Conditional result was False
2025-09-08 00:00:31.216328 |
2025-09-08 00:00:31.216436 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-08 00:00:31.585075 | orchestrator | ok
2025-09-08 00:00:31.597171 |
2025-09-08 00:00:31.597268 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-08 00:00:31.625178 | orchestrator | ok
2025-09-08 00:00:31.632349 |
2025-09-08 00:00:31.632446 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-08 00:00:31.901502 | orchestrator -> localhost | ok
2025-09-08 00:00:31.909155 |
2025-09-08 00:00:31.909235 | TASK [validate-host : Collect information about the host]
2025-09-08 00:00:33.117337 | orchestrator | ok
2025-09-08 00:00:33.131417 |
2025-09-08 00:00:33.131519 | TASK [validate-host : Sanitize hostname]
2025-09-08 00:00:33.224206 | orchestrator | ok
2025-09-08 00:00:33.228630 |
2025-09-08 00:00:33.228716 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-08 00:00:34.443372 | orchestrator -> localhost | changed
2025-09-08 00:00:34.449407 |
2025-09-08 00:00:34.449503 | TASK [validate-host : Collect information about zuul worker]
2025-09-08 00:00:35.205839 | orchestrator | ok
2025-09-08 00:00:35.211003 |
2025-09-08 00:00:35.211098 | TASK [validate-host : Write out all zuul information for each host]
2025-09-08 00:00:35.922982 | orchestrator -> localhost | changed
2025-09-08 00:00:35.933182 |
2025-09-08 00:00:35.933281 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-08 00:00:36.212976 | orchestrator | ok
2025-09-08 00:00:36.218693 |
2025-09-08 00:00:36.218779 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-08 00:01:34.127831 | orchestrator | changed:
2025-09-08 00:01:34.128010 | orchestrator | .d..t...... src/
2025-09-08 00:01:34.128045 | orchestrator | .d..t...... src/github.com/
2025-09-08 00:01:34.128070 | orchestrator | .d..t...... src/github.com/osism/
2025-09-08 00:01:34.128092 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-08 00:01:34.128112 | orchestrator | RedHat.yml
2025-09-08 00:01:34.141281 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-08 00:01:34.141298 | orchestrator | RedHat.yml
2025-09-08 00:01:34.141351 | orchestrator | = 2.2.0"...
2025-09-08 00:01:52.192588 | orchestrator | 00:01:52.192 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-08 00:01:52.219206 | orchestrator | 00:01:52.218 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-08 00:01:52.383557 | orchestrator | 00:01:52.383 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-08 00:01:52.856756 | orchestrator | 00:01:52.856 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-08 00:01:52.940192 | orchestrator | 00:01:52.939 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
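The `.d..t......` and `.L..t......` markers in the synchronize output above are rsync `--itemize-changes` flag strings: position 2 encodes the file type (`d` directory, `L` symlink, `f` regular file) and position 5 is `t` when the modification time differs. A small decoder, as a sketch of how to read them (the `decode` helper is illustrative, not part of any tool here):

```shell
#!/bin/sh
# Sketch: decode the rsync --itemize-changes strings seen in the log.
# Char 2 = file type, char 5 = 't' when the mtime differs.
decode() {
  flags="$1"
  case "$(printf '%s' "$flags" | cut -c2)" in
    d) kind=directory ;;
    L) kind=symlink ;;
    f) kind=file ;;
    *) kind=other ;;
  esac
  case "$(printf '%s' "$flags" | cut -c5)" in
    t) when="timestamp differs" ;;
    .) when="timestamp unchanged" ;;
    *) when="timestamp unknown" ;;
  esac
  echo "$kind: $when"
}

decode ".d..t......"   # directory: timestamp differs
decode ".L..t......"   # symlink: timestamp differs
```

The `CentOS.yml -> RedHat.yml` entry is therefore a symlink whose timestamp changed, not a renamed file.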
2025-09-08 00:01:53.612566 | orchestrator | 00:01:53.612 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-08 00:01:54.031078 | orchestrator | 00:01:54.030 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-08 00:01:54.936871 | orchestrator | 00:01:54.936 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-08 00:01:54.936960 | orchestrator | 00:01:54.936 STDOUT terraform: Providers are signed by their developers.
2025-09-08 00:01:54.936975 | orchestrator | 00:01:54.936 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-08 00:01:54.936981 | orchestrator | 00:01:54.936 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-08 00:01:54.936986 | orchestrator | 00:01:54.936 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-08 00:01:54.937024 | orchestrator | 00:01:54.936 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-08 00:01:54.937071 | orchestrator | 00:01:54.937 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-08 00:01:54.937084 | orchestrator | 00:01:54.937 STDOUT terraform: you run "tofu init" in the future.
2025-09-08 00:01:54.937182 | orchestrator | 00:01:54.937 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-08 00:01:54.938147 | orchestrator | 00:01:54.937 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-08 00:01:54.938240 | orchestrator | 00:01:54.937 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-08 00:01:54.938247 | orchestrator | 00:01:54.937 STDOUT terraform: should now work.
2025-09-08 00:01:54.938253 | orchestrator | 00:01:54.937 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-08 00:01:54.938258 | orchestrator | 00:01:54.937 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-08 00:01:54.938264 | orchestrator | 00:01:54.937 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-08 00:01:55.048053 | orchestrator | 00:01:55.046 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-09-08 00:01:55.048144 | orchestrator | 00:01:55.046 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-08 00:01:55.305609 | orchestrator | 00:01:55.305 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-08 00:01:55.305690 | orchestrator | 00:01:55.305 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-08 00:01:55.305703 | orchestrator | 00:01:55.305 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-08 00:01:55.305710 | orchestrator | 00:01:55.305 STDOUT terraform: for this configuration.
2025-09-08 00:01:55.457007 | orchestrator | 00:01:55.455 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-09-08 00:01:55.457083 | orchestrator | 00:01:55.455 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
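The init step above resolved three providers: `hashicorp/null` (latest, v3.2.4), `terraform-provider-openstack/openstack` (constrained to `>= 1.53.0`, v3.3.2), and `hashicorp/local` (v2.5.3), and recorded the selections in `.terraform.lock.hcl`. A configuration declaring these providers would look roughly like the following sketch; only the `>= 1.53.0` constraint is visible in the log, the rest of the block shape is an assumption about the testbed repository:

```hcl
# Sketch (assumed shape): provider requirements matching the
# resolution messages in the init output above.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    null = {
      source = "hashicorp/null"
    }
    local = {
      source = "hashicorp/local"
    }
  }
}
```

Committing the generated `.terraform.lock.hcl` pins these exact versions (and their signing keys) for future `tofu init` runs, which is what the log message recommends.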
2025-09-08 00:01:55.600531 | orchestrator | 00:01:55.595 STDOUT terraform: ci.auto.tfvars
2025-09-08 00:01:55.614181 | orchestrator | 00:01:55.613 STDOUT terraform: default_custom.tf
2025-09-08 00:01:55.749418 | orchestrator | 00:01:55.749 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-09-08 00:01:56.713979 | orchestrator | 00:01:56.713 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-08 00:01:57.275121 | orchestrator | 00:01:57.272 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-08 00:01:57.555312 | orchestrator | 00:01:57.555 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-08 00:01:57.555409 | orchestrator | 00:01:57.555 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-08 00:01:57.555418 | orchestrator | 00:01:57.555 STDOUT terraform:  + create
2025-09-08 00:01:57.555424 | orchestrator | 00:01:57.555 STDOUT terraform:  <= read (data resources)
2025-09-08 00:01:57.555430 | orchestrator | 00:01:57.555 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-08 00:01:57.555435 | orchestrator | 00:01:57.555 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-09-08 00:01:57.555439 | orchestrator | 00:01:57.555 STDOUT terraform:  # (config refers to values not yet known)
2025-09-08 00:01:57.555446 | orchestrator | 00:01:57.555 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-08 00:01:57.555487 | orchestrator | 00:01:57.555 STDOUT terraform:  + checksum = (known after apply)
2025-09-08 00:01:57.555514 | orchestrator | 00:01:57.555 STDOUT terraform:  + created_at = (known after apply)
2025-09-08 00:01:57.555551 | orchestrator | 00:01:57.555 STDOUT terraform:  + file = (known after apply)
2025-09-08 00:01:57.555574 | orchestrator | 00:01:57.555 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.555638 | orchestrator | 00:01:57.555 STDOUT terraform:  + metadata = (known after apply)
2025-09-08 00:01:57.555661 | orchestrator | 00:01:57.555 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-08 00:01:57.555667 | orchestrator | 00:01:57.555 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-08 00:01:57.555673 | orchestrator | 00:01:57.555 STDOUT terraform:  + most_recent = true
2025-09-08 00:01:57.555730 | orchestrator | 00:01:57.555 STDOUT terraform:  + name = (known after apply)
2025-09-08 00:01:57.555736 | orchestrator | 00:01:57.555 STDOUT terraform:  + protected = (known after apply)
2025-09-08 00:01:57.555766 | orchestrator | 00:01:57.555 STDOUT terraform:  + region = (known after apply)
2025-09-08 00:01:57.555816 | orchestrator | 00:01:57.555 STDOUT terraform:  + schema = (known after apply)
2025-09-08 00:01:57.555822 | orchestrator | 00:01:57.555 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-08 00:01:57.555850 | orchestrator | 00:01:57.555 STDOUT terraform:  + tags = (known after apply)
2025-09-08 00:01:57.555904 | orchestrator | 00:01:57.555 STDOUT terraform:  + updated_at = (known after apply)
2025-09-08 00:01:57.555910 | orchestrator | 00:01:57.555 STDOUT terraform:  }
2025-09-08 00:01:57.556059 | orchestrator | 00:01:57.555 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-09-08 00:01:57.556077 | orchestrator | 00:01:57.556 STDOUT terraform:  # (config refers to values not yet known)
2025-09-08 00:01:57.556133 | orchestrator | 00:01:57.556 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-08 00:01:57.556170 | orchestrator | 00:01:57.556 STDOUT terraform:  + checksum = (known after apply)
2025-09-08 00:01:57.556178 | orchestrator | 00:01:57.556 STDOUT terraform:  + created_at = (known after apply)
2025-09-08 00:01:57.556212 | orchestrator | 00:01:57.556 STDOUT terraform:  + file = (known after apply)
2025-09-08 00:01:57.556249 | orchestrator | 00:01:57.556 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.556271 | orchestrator | 00:01:57.556 STDOUT terraform:  + metadata = (known after apply)
2025-09-08 00:01:57.556301 | orchestrator | 00:01:57.556 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-08 00:01:57.556344 | orchestrator | 00:01:57.556 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-08 00:01:57.556359 | orchestrator | 00:01:57.556 STDOUT terraform:  + most_recent = true
2025-09-08 00:01:57.556382 | orchestrator | 00:01:57.556 STDOUT terraform:  + name = (known after apply)
2025-09-08 00:01:57.556420 | orchestrator | 00:01:57.556 STDOUT terraform:  + protected = (known after apply)
2025-09-08 00:01:57.556461 | orchestrator | 00:01:57.556 STDOUT terraform:  + region = (known after apply)
2025-09-08 00:01:57.556468 | orchestrator | 00:01:57.556 STDOUT terraform:  + schema = (known after apply)
2025-09-08 00:01:57.556505 | orchestrator | 00:01:57.556 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-08 00:01:57.556546 | orchestrator | 00:01:57.556 STDOUT terraform:  + tags = (known after apply)
2025-09-08 00:01:57.556555 | orchestrator | 00:01:57.556 STDOUT terraform:  + updated_at = (known after apply)
2025-09-08 00:01:57.556561 | orchestrator | 00:01:57.556 STDOUT terraform:  }
2025-09-08 00:01:57.556781 | orchestrator | 00:01:57.556 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-09-08 00:01:57.556822 | orchestrator | 00:01:57.556 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-09-08 00:01:57.556852 | orchestrator | 00:01:57.556 STDOUT terraform:  + content = (known after apply)
2025-09-08 00:01:57.556889 | orchestrator | 00:01:57.556 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-08 00:01:57.556926 | orchestrator | 00:01:57.556 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-08 00:01:57.556963 | orchestrator | 00:01:57.556 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-08 00:01:57.557003 | orchestrator | 00:01:57.556 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-08 00:01:57.557039 | orchestrator | 00:01:57.556 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-08 00:01:57.557079 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-08 00:01:57.557103 | orchestrator | 00:01:57.557 STDOUT terraform:  + directory_permission = "0777"
2025-09-08 00:01:57.557157 | orchestrator | 00:01:57.557 STDOUT terraform:  + file_permission = "0644"
2025-09-08 00:01:57.557182 | orchestrator | 00:01:57.557 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-09-08 00:01:57.557228 | orchestrator | 00:01:57.557 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.557233 | orchestrator | 00:01:57.557 STDOUT terraform:  }
2025-09-08 00:01:57.557270 | orchestrator | 00:01:57.557 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-09-08 00:01:57.557296 | orchestrator | 00:01:57.557 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-09-08 00:01:57.557335 | orchestrator | 00:01:57.557 STDOUT terraform:  + content = (known after apply)
2025-09-08 00:01:57.557373 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-08 00:01:57.557409 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-08 00:01:57.557446 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-08 00:01:57.557488 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-08 00:01:57.557522 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-08 00:01:57.557569 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-08 00:01:57.557577 | orchestrator | 00:01:57.557 STDOUT terraform:  + directory_permission = "0777"
2025-09-08 00:01:57.557607 | orchestrator | 00:01:57.557 STDOUT terraform:  + file_permission = "0644"
2025-09-08 00:01:57.557652 | orchestrator | 00:01:57.557 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-09-08 00:01:57.557680 | orchestrator | 00:01:57.557 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.557687 | orchestrator | 00:01:57.557 STDOUT terraform:  }
2025-09-08 00:01:57.557733 | orchestrator | 00:01:57.557 STDOUT terraform:  # local_file.inventory will be created
2025-09-08 00:01:57.557740 | orchestrator | 00:01:57.557 STDOUT terraform:  + resource "local_file" "inventory" {
2025-09-08 00:01:57.557778 | orchestrator | 00:01:57.557 STDOUT terraform:  + content = (known after apply)
2025-09-08 00:01:57.557826 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-08 00:01:57.557852 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-08 00:01:57.557888 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-08 00:01:57.557927 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-08 00:01:57.557964 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-08 00:01:57.558000 | orchestrator | 00:01:57.557 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-08 00:01:57.558051 | orchestrator | 00:01:57.557 STDOUT terraform:  + directory_permission = "0777"
2025-09-08 00:01:57.558072 | orchestrator | 00:01:57.558 STDOUT terraform:  + file_permission = "0644"
2025-09-08 00:01:57.558153 | orchestrator | 00:01:57.558 STDOUT terraform:  + filename = "inventory.ci"
2025-09-08 00:01:57.558178 | orchestrator | 00:01:57.558 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.558197 | orchestrator | 00:01:57.558 STDOUT terraform:  }
2025-09-08 00:01:57.558245 | orchestrator | 00:01:57.558 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-09-08 00:01:57.558273 | orchestrator | 00:01:57.558 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-09-08 00:01:57.558306 | orchestrator | 00:01:57.558 STDOUT terraform:  + content = (sensitive value)
2025-09-08 00:01:57.558345 | orchestrator | 00:01:57.558 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-08 00:01:57.558379 | orchestrator | 00:01:57.558 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-08 00:01:57.558423 | orchestrator | 00:01:57.558 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-08 00:01:57.558452 | orchestrator | 00:01:57.558 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-08 00:01:57.558496 | orchestrator | 00:01:57.558 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-08 00:01:57.558523 | orchestrator | 00:01:57.558 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-08 00:01:57.558549 | orchestrator | 00:01:57.558 STDOUT terraform:  + directory_permission = "0700"
2025-09-08 00:01:57.558584 | orchestrator | 00:01:57.558 STDOUT terraform:  + file_permission = "0600"
2025-09-08 00:01:57.558606 | orchestrator | 00:01:57.558 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-09-08 00:01:57.558646 | orchestrator | 00:01:57.558 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.558653 | orchestrator | 00:01:57.558 STDOUT terraform:  }
2025-09-08 00:01:57.558706 | orchestrator | 00:01:57.558 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-09-08 00:01:57.558747 | orchestrator | 00:01:57.558 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-09-08 00:01:57.558753 | orchestrator | 00:01:57.558 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.558760 | orchestrator | 00:01:57.558 STDOUT terraform:  }
2025-09-08 00:01:57.558817 | orchestrator | 00:01:57.558 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-08 00:01:57.558873 | orchestrator | 00:01:57.558 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-08 00:01:57.558902 | orchestrator | 00:01:57.558 STDOUT terraform:  + attachment = (known after apply)
2025-09-08 00:01:57.558938 | orchestrator | 00:01:57.558 STDOUT terraform:  + availability_zone = "nova"
2025-09-08 00:01:57.558967 | orchestrator | 00:01:57.558 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.559026 | orchestrator | 00:01:57.558 STDOUT terraform:  + image_id = (known after apply)
2025-09-08 00:01:57.559034 | orchestrator | 00:01:57.558 STDOUT terraform:  + metadata = (known after apply)
2025-09-08 00:01:57.559080 | orchestrator | 00:01:57.559 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-09-08 00:01:57.559126 | orchestrator | 00:01:57.559 STDOUT terraform:  + region = (known after apply)
2025-09-08 00:01:57.559151 | orchestrator | 00:01:57.559 STDOUT terraform:  + size = 80
2025-09-08 00:01:57.559194 | orchestrator | 00:01:57.559 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-08 00:01:57.559202 | orchestrator | 00:01:57.559 STDOUT terraform:  + volume_type = "ssd"
2025-09-08 00:01:57.559208 | orchestrator | 00:01:57.559 STDOUT terraform:  }
2025-09-08 00:01:57.559279 | orchestrator | 00:01:57.559 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-08 00:01:57.559306 | orchestrator | 00:01:57.559 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:57.559351 | orchestrator | 00:01:57.559 STDOUT terraform:  + attachment = (known after apply)
2025-09-08 00:01:57.559358 | orchestrator | 00:01:57.559 STDOUT terraform:  + availability_zone = "nova"
2025-09-08 00:01:57.559412 | orchestrator | 00:01:57.559 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.559465 | orchestrator | 00:01:57.559 STDOUT terraform:  + image_id = (known after apply)
2025-09-08 00:01:57.559473 | orchestrator | 00:01:57.559 STDOUT terraform:  + metadata = (known after apply)
2025-09-08 00:01:57.559523 | orchestrator | 00:01:57.559 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-09-08 00:01:57.559560 | orchestrator | 00:01:57.559 STDOUT terraform:  + region = (known after apply)
2025-09-08 00:01:57.559584 | orchestrator | 00:01:57.559 STDOUT terraform:  + size = 80
2025-09-08 00:01:57.559606 | orchestrator | 00:01:57.559 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-08 00:01:57.559621 | orchestrator | 00:01:57.559 STDOUT terraform:  + volume_type = "ssd"
2025-09-08 00:01:57.559642 | orchestrator | 00:01:57.559 STDOUT terraform:  }
2025-09-08 00:01:57.559688 | orchestrator | 00:01:57.559 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-08 00:01:57.559737 | orchestrator | 00:01:57.559 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:57.559772 | orchestrator | 00:01:57.559 STDOUT terraform:  + attachment = (known after apply)
2025-09-08 00:01:57.559806 | orchestrator | 00:01:57.559 STDOUT terraform:  + availability_zone = "nova"
2025-09-08 00:01:57.559836 | orchestrator | 00:01:57.559 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.559892 | orchestrator | 00:01:57.559 STDOUT terraform:  + image_id = (known after apply)
2025-09-08 00:01:57.559899 | orchestrator | 00:01:57.559 STDOUT terraform:  + metadata = (known after apply)
2025-09-08 00:01:57.559950 | orchestrator | 00:01:57.559 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-09-08 00:01:57.559988 | orchestrator | 00:01:57.559 STDOUT terraform:  + region = (known after apply)
2025-09-08 00:01:57.560004 | orchestrator | 00:01:57.559 STDOUT terraform:  + size = 80
2025-09-08 00:01:57.560032 | orchestrator | 00:01:57.559 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-08 00:01:57.560047 | orchestrator | 00:01:57.560 STDOUT terraform:  + volume_type = "ssd"
2025-09-08 00:01:57.560070 | orchestrator | 00:01:57.560 STDOUT terraform:  }
2025-09-08 00:01:57.560134 | orchestrator | 00:01:57.560 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-08 00:01:57.560186 | orchestrator | 00:01:57.560 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:57.560240 | orchestrator | 00:01:57.560 STDOUT terraform:  + attachment = (known after apply)
2025-09-08 00:01:57.560249 | orchestrator | 00:01:57.560 STDOUT terraform:  + availability_zone = "nova"
2025-09-08 00:01:57.560281 | orchestrator | 00:01:57.560 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.560331 | orchestrator | 00:01:57.560 STDOUT terraform:  + image_id = (known after apply)
2025-09-08 00:01:57.560341 | orchestrator | 00:01:57.560 STDOUT terraform:  + metadata = (known after apply)
2025-09-08 00:01:57.560402 | orchestrator | 00:01:57.560 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-09-08 00:01:57.560435 | orchestrator | 00:01:57.560 STDOUT terraform:  + region = (known after apply)
2025-09-08 00:01:57.560442 | orchestrator | 00:01:57.560 STDOUT terraform:  + size = 80
2025-09-08 00:01:57.560470 | orchestrator | 00:01:57.560 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-08 00:01:57.560504 | orchestrator | 00:01:57.560 STDOUT terraform:  + volume_type = "ssd"
2025-09-08 00:01:57.560510 | orchestrator | 00:01:57.560 STDOUT terraform:  }
2025-09-08 00:01:57.560555 | orchestrator | 00:01:57.560 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-08 00:01:57.560600 | orchestrator | 00:01:57.560 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:57.560636 | orchestrator | 00:01:57.560 STDOUT terraform:  + attachment = (known after apply)
2025-09-08 00:01:57.560677 | orchestrator | 00:01:57.560 STDOUT terraform:  + availability_zone = "nova"
2025-09-08 00:01:57.560699 | orchestrator | 00:01:57.560 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.560735 | orchestrator | 00:01:57.560 STDOUT terraform:  + image_id = (known after apply)
2025-09-08 00:01:57.560774 | orchestrator | 00:01:57.560 STDOUT terraform:  + metadata = (known after apply)
2025-09-08 00:01:57.560817 | orchestrator | 00:01:57.560 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-09-08 00:01:57.560857 | orchestrator | 00:01:57.560 STDOUT terraform:  + region = (known after apply)
2025-09-08 00:01:57.560879 | orchestrator | 00:01:57.560 STDOUT terraform:  + size = 80
2025-09-08 00:01:57.560926 | orchestrator | 00:01:57.560 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-08 00:01:57.560932 | orchestrator | 00:01:57.560 STDOUT terraform:  + volume_type = "ssd"
2025-09-08 00:01:57.560937 | orchestrator | 00:01:57.560 STDOUT terraform:  }
2025-09-08 00:01:57.560981 | orchestrator | 00:01:57.560 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-08 00:01:57.561033 | orchestrator | 00:01:57.560 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-08 00:01:57.561064 | orchestrator | 00:01:57.561 STDOUT terraform:  + attachment = (known after apply)
2025-09-08 00:01:57.561097 | orchestrator | 00:01:57.561 STDOUT terraform:  + availability_zone = "nova"
2025-09-08 00:01:57.561142 | orchestrator | 00:01:57.561 STDOUT terraform:  + id = (known after apply)
2025-09-08 00:01:57.561175 | orchestrator | 00:01:57.561 STDOUT terraform:  + image_id = (known after apply)
2025-09-08 00:01:57.561211 | orchestrator | 00:01:57.561 STDOUT
terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.561259 | orchestrator | 00:01:57.561 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-08 00:01:57.561289 | orchestrator | 00:01:57.561 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.561311 | orchestrator | 00:01:57.561 STDOUT terraform:  + size = 80 2025-09-08 00:01:57.561336 | orchestrator | 00:01:57.561 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.561361 | orchestrator | 00:01:57.561 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.561368 | orchestrator | 00:01:57.561 STDOUT terraform:  } 2025-09-08 00:01:57.561419 | orchestrator | 00:01:57.561 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-08 00:01:57.561483 | orchestrator | 00:01:57.561 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-08 00:01:57.561490 | orchestrator | 00:01:57.561 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.561519 | orchestrator | 00:01:57.561 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.561558 | orchestrator | 00:01:57.561 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.561594 | orchestrator | 00:01:57.561 STDOUT terraform:  + image_id = (known after apply) 2025-09-08 00:01:57.561631 | orchestrator | 00:01:57.561 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.561675 | orchestrator | 00:01:57.561 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-08 00:01:57.561712 | orchestrator | 00:01:57.561 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.561735 | orchestrator | 00:01:57.561 STDOUT terraform:  + size = 80 2025-09-08 00:01:57.561753 | orchestrator | 00:01:57.561 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.561789 | orchestrator | 00:01:57.561 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 
00:01:57.561796 | orchestrator | 00:01:57.561 STDOUT terraform:  } 2025-09-08 00:01:57.561854 | orchestrator | 00:01:57.561 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-08 00:01:57.561894 | orchestrator | 00:01:57.561 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:57.561936 | orchestrator | 00:01:57.561 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.561943 | orchestrator | 00:01:57.561 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.561987 | orchestrator | 00:01:57.561 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.562025 | orchestrator | 00:01:57.561 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.562774 | orchestrator | 00:01:57.562 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-08 00:01:57.563400 | orchestrator | 00:01:57.562 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.563681 | orchestrator | 00:01:57.563 STDOUT terraform:  + size = 20 2025-09-08 00:01:57.563936 | orchestrator | 00:01:57.563 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.564281 | orchestrator | 00:01:57.563 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.564488 | orchestrator | 00:01:57.564 STDOUT terraform:  } 2025-09-08 00:01:57.565405 | orchestrator | 00:01:57.564 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-08 00:01:57.565849 | orchestrator | 00:01:57.565 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:57.566194 | orchestrator | 00:01:57.565 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.566585 | orchestrator | 00:01:57.566 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.567256 | orchestrator | 00:01:57.566 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.567868 | 
orchestrator | 00:01:57.567 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.568601 | orchestrator | 00:01:57.567 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-08 00:01:57.569269 | orchestrator | 00:01:57.568 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.569537 | orchestrator | 00:01:57.569 STDOUT terraform:  + size = 20 2025-09-08 00:01:57.569971 | orchestrator | 00:01:57.569 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.570178 | orchestrator | 00:01:57.569 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.570336 | orchestrator | 00:01:57.570 STDOUT terraform:  } 2025-09-08 00:01:57.570764 | orchestrator | 00:01:57.570 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-08 00:01:57.570815 | orchestrator | 00:01:57.570 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:57.570845 | orchestrator | 00:01:57.570 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.570859 | orchestrator | 00:01:57.570 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.570911 | orchestrator | 00:01:57.570 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.570940 | orchestrator | 00:01:57.570 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.570983 | orchestrator | 00:01:57.570 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-08 00:01:57.571020 | orchestrator | 00:01:57.570 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.571045 | orchestrator | 00:01:57.571 STDOUT terraform:  + size = 20 2025-09-08 00:01:57.571072 | orchestrator | 00:01:57.571 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.571093 | orchestrator | 00:01:57.571 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.571099 | orchestrator | 00:01:57.571 STDOUT terraform:  } 2025-09-08 00:01:57.571251 | 
orchestrator | 00:01:57.571 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-08 00:01:57.571303 | orchestrator | 00:01:57.571 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:57.571310 | orchestrator | 00:01:57.571 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.571582 | orchestrator | 00:01:57.571 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.571870 | orchestrator | 00:01:57.571 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.571876 | orchestrator | 00:01:57.571 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.571880 | orchestrator | 00:01:57.571 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-08 00:01:57.571896 | orchestrator | 00:01:57.571 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.571912 | orchestrator | 00:01:57.571 STDOUT terraform:  + size = 20 2025-09-08 00:01:57.571948 | orchestrator | 00:01:57.571 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.571995 | orchestrator | 00:01:57.571 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.572093 | orchestrator | 00:01:57.571 STDOUT terraform:  } 2025-09-08 00:01:57.572182 | orchestrator | 00:01:57.571 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-08 00:01:57.572236 | orchestrator | 00:01:57.571 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:57.572253 | orchestrator | 00:01:57.571 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.572257 | orchestrator | 00:01:57.571 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.572261 | orchestrator | 00:01:57.571 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.572265 | orchestrator | 00:01:57.571 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 
00:01:57.572345 | orchestrator | 00:01:57.571 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-08 00:01:57.572378 | orchestrator | 00:01:57.571 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.572445 | orchestrator | 00:01:57.571 STDOUT terraform:  + size = 20 2025-09-08 00:01:57.572464 | orchestrator | 00:01:57.571 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.572468 | orchestrator | 00:01:57.571 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.572472 | orchestrator | 00:01:57.571 STDOUT terraform:  } 2025-09-08 00:01:57.572560 | orchestrator | 00:01:57.571 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-08 00:01:57.572706 | orchestrator | 00:01:57.571 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:57.572831 | orchestrator | 00:01:57.571 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.572866 | orchestrator | 00:01:57.571 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.572952 | orchestrator | 00:01:57.571 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.573074 | orchestrator | 00:01:57.572 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.573122 | orchestrator | 00:01:57.572 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-08 00:01:57.573127 | orchestrator | 00:01:57.572 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.573134 | orchestrator | 00:01:57.572 STDOUT terraform:  + size = 20 2025-09-08 00:01:57.573138 | orchestrator | 00:01:57.572 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.573142 | orchestrator | 00:01:57.572 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.573145 | orchestrator | 00:01:57.572 STDOUT terraform:  } 2025-09-08 00:01:57.573165 | orchestrator | 00:01:57.572 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-08 00:01:57.573195 | orchestrator | 00:01:57.572 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:57.573243 | orchestrator | 00:01:57.572 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.573264 | orchestrator | 00:01:57.572 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.573268 | orchestrator | 00:01:57.572 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.573272 | orchestrator | 00:01:57.572 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.573276 | orchestrator | 00:01:57.572 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-08 00:01:57.573353 | orchestrator | 00:01:57.572 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.573435 | orchestrator | 00:01:57.572 STDOUT terraform:  + size = 20 2025-09-08 00:01:57.573461 | orchestrator | 00:01:57.572 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.573557 | orchestrator | 00:01:57.572 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.573591 | orchestrator | 00:01:57.572 STDOUT terraform:  } 2025-09-08 00:01:57.573607 | orchestrator | 00:01:57.572 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-08 00:01:57.573634 | orchestrator | 00:01:57.572 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:57.573718 | orchestrator | 00:01:57.572 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.573765 | orchestrator | 00:01:57.572 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.573799 | orchestrator | 00:01:57.572 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.573934 | orchestrator | 00:01:57.572 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.573989 | orchestrator | 00:01:57.572 STDOUT 
terraform:  + name = "testbed-volume-7-node-4" 2025-09-08 00:01:57.574032 | orchestrator | 00:01:57.572 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.574063 | orchestrator | 00:01:57.572 STDOUT terraform:  + size = 20 2025-09-08 00:01:57.574091 | orchestrator | 00:01:57.572 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.574140 | orchestrator | 00:01:57.572 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.574145 | orchestrator | 00:01:57.572 STDOUT terraform:  } 2025-09-08 00:01:57.574148 | orchestrator | 00:01:57.572 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-08 00:01:57.574152 | orchestrator | 00:01:57.572 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-08 00:01:57.574177 | orchestrator | 00:01:57.572 STDOUT terraform:  + attachment = (known after apply) 2025-09-08 00:01:57.574181 | orchestrator | 00:01:57.572 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.574190 | orchestrator | 00:01:57.572 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.574223 | orchestrator | 00:01:57.573 STDOUT terraform:  + metadata = (known after apply) 2025-09-08 00:01:57.574228 | orchestrator | 00:01:57.573 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-08 00:01:57.574248 | orchestrator | 00:01:57.573 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.574252 | orchestrator | 00:01:57.573 STDOUT terraform:  + size = 20 2025-09-08 00:01:57.574256 | orchestrator | 00:01:57.573 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-08 00:01:57.574260 | orchestrator | 00:01:57.573 STDOUT terraform:  + volume_type = "ssd" 2025-09-08 00:01:57.574297 | orchestrator | 00:01:57.573 STDOUT terraform:  } 2025-09-08 00:01:57.574337 | orchestrator | 00:01:57.573 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-08 00:01:57.574436 | 
  + resource "openstack_compute_instance_v2" "manager_server" {
        + access_ip_v4        = (known after apply)
        + access_ip_v6        = (known after apply)
        + all_metadata        = (known after apply)
        + all_tags            = (known after apply)
        + availability_zone   = "nova"
        + config_drive        = true
        + created             = (known after apply)
        + flavor_id           = (known after apply)
        + flavor_name         = "OSISM-4V-16"
        + force_delete        = false
        + hypervisor_hostname = (known after apply)
        + id                  = (known after apply)
        + image_id            = (known after apply)
        + image_name          = (known after apply)
        + key_pair            = "testbed"
        + name                = "testbed-manager"
        + power_state         = "active"
        + region              = (known after apply)
        + security_groups     = (known after apply)
        + stop_before_destroy = false
        + updated             = (known after apply)
        + user_data           = (sensitive value)

        + block_device {
            + boot_index            = 0
            + delete_on_termination = false
            + destination_type      = "volume"
            + multiattach           = false
            + source_type           = "volume"
            + uuid                  = (known after apply)
          }

        + network {
            + access_network = false
            + fixed_ip_v4    = (known after apply)
            + fixed_ip_v6    = (known after apply)
            + mac            = (known after apply)
            + name           = (known after apply)
            + port           = (known after apply)
            + uuid           = (known after apply)
          }
      }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
        + access_ip_v4        = (known after apply)
        + access_ip_v6        = (known after apply)
        + all_metadata        = (known after apply)
        + all_tags            = (known after apply)
        + availability_zone   = "nova"
        + config_drive        = true
        + created             = (known after apply)
        + flavor_id           = (known after apply)
        + flavor_name         = "OSISM-8V-32"
        + force_delete        = false
        + hypervisor_hostname = (known after apply)
        + id                  = (known after apply)
        + image_id            = (known after apply)
        + image_name          = (known after apply)
        + key_pair            = "testbed"
        + name                = "testbed-node-0"
        + power_state         = "active"
        + region              = (known after apply)
        + security_groups     = (known after apply)
        + stop_before_destroy = false
        + updated             = (known after apply)
        + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

        + block_device {
            + boot_index            = 0
            + delete_on_termination = false
            + destination_type      = "volume"
            + multiattach           = false
            + source_type           = "volume"
            + uuid                  = (known after apply)
          }

        + network {
            + access_network = false
            + fixed_ip_v4    = (known after apply)
            + fixed_ip_v6    = (known after apply)
            + mac            = (known after apply)
            + name           = (known after apply)
            + port           = (known after apply)
            + uuid           = (known after apply)
          }
      }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
        + access_ip_v4        = (known after apply)
        + access_ip_v6        = (known after apply)
        + all_metadata        = (known after apply)
        + all_tags            = (known after apply)
        + availability_zone   = "nova"
        + config_drive        = true
        + created             = (known after apply)
        + flavor_id           = (known after apply)
        + flavor_name         = "OSISM-8V-32"
        + force_delete        = false
        + hypervisor_hostname = (known after apply)
        + id                  = (known after apply)
        + image_id            = (known after apply)
        + image_name          = (known after apply)
        + key_pair            = "testbed"
        + name                = "testbed-node-1"
        + power_state         = "active"
        + region              = (known after apply)
        + security_groups     = (known after apply)
        + stop_before_destroy = false
        + updated             = (known after apply)
        + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

        + block_device {
            + boot_index            = 0
            + delete_on_termination = false
            + destination_type      = "volume"
            + multiattach           = false
            + source_type           = "volume"
            + uuid                  = (known after apply)
          }

        + network {
            + access_network = false
            + fixed_ip_v4    = (known after apply)
            + fixed_ip_v6    = (known after apply)
            + mac            = (known after apply)
            + name           = (known after apply)
            + port           = (known after apply)
            + uuid           = (known after apply)
          }
      }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
        + access_ip_v4        = (known after apply)
        + access_ip_v6        = (known after apply)
        + all_metadata        = (known after apply)
        + all_tags            = (known after apply)
        + availability_zone   = "nova"
        + config_drive        = true
        + created             = (known after apply)
        + flavor_id           = (known after apply)
        + flavor_name         = "OSISM-8V-32"
        + force_delete        = false
        + hypervisor_hostname = (known after apply)
        + id                  = (known after apply)
        + image_id            = (known after apply)
        + image_name          = (known after apply)
        + key_pair            = "testbed"
        + name                = "testbed-node-2"
        + power_state         = "active"
        + region              = (known after apply)
        + security_groups     = (known after apply)
        + stop_before_destroy = false
        + updated             = (known after apply)
        + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

        + block_device {
            + boot_index            = 0
            + delete_on_termination = false
            + destination_type      = "volume"
            + multiattach           = false
            + source_type           = "volume"
            + uuid                  = (known after apply)
          }

        + network {
            + access_network = false
            + fixed_ip_v4    = (known after apply)
            + fixed_ip_v6    = (known after apply)
            + mac            = (known after apply)
            + name           = (known after apply)
            + port           = (known after apply)
            + uuid           = (known after apply)
          }
      }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
        + access_ip_v4        = (known after apply)
        + access_ip_v6        = (known after apply)
        + all_metadata        = (known after apply)
        + all_tags            = (known after apply)
        + availability_zone   = "nova"
        + config_drive        = true
        + created             = (known after apply)
        + flavor_id           = (known after apply)
        + flavor_name         = "OSISM-8V-32"
        + force_delete        = false
        + hypervisor_hostname = (known after apply)
        + id                  = (known after apply)
        + image_id            = (known after apply)
        + image_name          = (known after apply)
        + key_pair            = "testbed"
        + name                = "testbed-node-3"
        + power_state         = "active"
        + region              = (known after apply)
        + security_groups     = (known after apply)
        + stop_before_destroy =
false 2025-09-08 00:01:57.578282 | orchestrator | 00:01:57.578 STDOUT terraform:  + updated = (known after apply) 2025-09-08 00:01:57.578300 | orchestrator | 00:01:57.578 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-08 00:01:57.578306 | orchestrator | 00:01:57.578 STDOUT terraform:  + block_device { 2025-09-08 00:01:57.578310 | orchestrator | 00:01:57.578 STDOUT terraform:  + boot_index = 0 2025-09-08 00:01:57.578374 | orchestrator | 00:01:57.578 STDOUT terraform:  + delete_on_termination = false 2025-09-08 00:01:57.578491 | orchestrator | 00:01:57.578 STDOUT terraform:  + destination_type = "volume" 2025-09-08 00:01:57.578500 | orchestrator | 00:01:57.578 STDOUT terraform:  + multiattach = false 2025-09-08 00:01:57.578503 | orchestrator | 00:01:57.578 STDOUT terraform:  + source_type = "volume" 2025-09-08 00:01:57.578509 | orchestrator | 00:01:57.578 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:57.578513 | orchestrator | 00:01:57.578 STDOUT terraform:  } 2025-09-08 00:01:57.578517 | orchestrator | 00:01:57.578 STDOUT terraform:  + network { 2025-09-08 00:01:57.578521 | orchestrator | 00:01:57.578 STDOUT terraform:  + access_network = false 2025-09-08 00:01:57.578525 | orchestrator | 00:01:57.578 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-08 00:01:57.578528 | orchestrator | 00:01:57.578 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-08 00:01:57.578532 | orchestrator | 00:01:57.578 STDOUT terraform:  + mac = (known after apply) 2025-09-08 00:01:57.578536 | orchestrator | 00:01:57.578 STDOUT terraform:  + name = (known after apply) 2025-09-08 00:01:57.578539 | orchestrator | 00:01:57.578 STDOUT terraform:  + port = (known after apply) 2025-09-08 00:01:57.578545 | orchestrator | 00:01:57.578 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:57.578549 | orchestrator | 00:01:57.578 STDOUT terraform:  } 2025-09-08 00:01:57.578553 | orchestrator | 00:01:57.578 
STDOUT terraform:  } 2025-09-08 00:01:57.578731 | orchestrator | 00:01:57.578 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-09-08 00:01:57.578736 | orchestrator | 00:01:57.578 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-08 00:01:57.578740 | orchestrator | 00:01:57.578 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-08 00:01:57.578744 | orchestrator | 00:01:57.578 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-08 00:01:57.578748 | orchestrator | 00:01:57.578 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-08 00:01:57.578753 | orchestrator | 00:01:57.578 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:57.578759 | orchestrator | 00:01:57.578 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.578850 | orchestrator | 00:01:57.578 STDOUT terraform:  + config_drive = true 2025-09-08 00:01:57.578860 | orchestrator | 00:01:57.578 STDOUT terraform:  + created = (known after apply) 2025-09-08 00:01:57.578864 | orchestrator | 00:01:57.578 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-08 00:01:57.578870 | orchestrator | 00:01:57.578 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-08 00:01:57.578903 | orchestrator | 00:01:57.578 STDOUT terraform:  + force_delete = false 2025-09-08 00:01:57.578980 | orchestrator | 00:01:57.578 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-08 00:01:57.578986 | orchestrator | 00:01:57.578 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.578992 | orchestrator | 00:01:57.578 STDOUT terraform:  + image_id = (known after apply) 2025-09-08 00:01:57.579048 | orchestrator | 00:01:57.578 STDOUT terraform:  + image_name = (known after apply) 2025-09-08 00:01:57.579055 | orchestrator | 00:01:57.579 STDOUT terraform:  + key_pair = "testbed" 2025-09-08 00:01:57.579099 | orchestrator | 00:01:57.579 STDOUT terraform:  + name = 
"testbed-node-4" 2025-09-08 00:01:57.579104 | orchestrator | 00:01:57.579 STDOUT terraform:  + power_state = "active" 2025-09-08 00:01:57.579153 | orchestrator | 00:01:57.579 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.579184 | orchestrator | 00:01:57.579 STDOUT terraform:  + security_groups = (known after apply) 2025-09-08 00:01:57.579253 | orchestrator | 00:01:57.579 STDOUT terraform:  + stop_before_destroy = false 2025-09-08 00:01:57.579323 | orchestrator | 00:01:57.579 STDOUT terraform:  + updated = (known after apply) 2025-09-08 00:01:57.579331 | orchestrator | 00:01:57.579 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-08 00:01:57.579339 | orchestrator | 00:01:57.579 STDOUT terraform:  + block_device { 2025-09-08 00:01:57.579384 | orchestrator | 00:01:57.579 STDOUT terraform:  + boot_index = 0 2025-09-08 00:01:57.579392 | orchestrator | 00:01:57.579 STDOUT terraform:  + delete_on_termination = false 2025-09-08 00:01:57.579425 | orchestrator | 00:01:57.579 STDOUT terraform:  + destination_type = "volume" 2025-09-08 00:01:57.579455 | orchestrator | 00:01:57.579 STDOUT terraform:  + multiattach = false 2025-09-08 00:01:57.579482 | orchestrator | 00:01:57.579 STDOUT terraform:  + source_type = "volume" 2025-09-08 00:01:57.579554 | orchestrator | 00:01:57.579 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:57.579560 | orchestrator | 00:01:57.579 STDOUT terraform:  } 2025-09-08 00:01:57.579564 | orchestrator | 00:01:57.579 STDOUT terraform:  + network { 2025-09-08 00:01:57.579570 | orchestrator | 00:01:57.579 STDOUT terraform:  + access_network = false 2025-09-08 00:01:57.579636 | orchestrator | 00:01:57.579 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-08 00:01:57.579641 | orchestrator | 00:01:57.579 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-08 00:01:57.579647 | orchestrator | 00:01:57.579 STDOUT terraform:  + mac = (known after apply) 2025-09-08 
00:01:57.579717 | orchestrator | 00:01:57.579 STDOUT terraform:  + name = (known after apply) 2025-09-08 00:01:57.579723 | orchestrator | 00:01:57.579 STDOUT terraform:  + port = (known after apply) 2025-09-08 00:01:57.579729 | orchestrator | 00:01:57.579 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:57.579806 | orchestrator | 00:01:57.579 STDOUT terraform:  } 2025-09-08 00:01:57.579813 | orchestrator | 00:01:57.579 STDOUT terraform:  } 2025-09-08 00:01:57.579817 | orchestrator | 00:01:57.579 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-09-08 00:01:57.579842 | orchestrator | 00:01:57.579 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-08 00:01:57.579889 | orchestrator | 00:01:57.579 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-08 00:01:57.579897 | orchestrator | 00:01:57.579 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-08 00:01:57.579941 | orchestrator | 00:01:57.579 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-08 00:01:57.580022 | orchestrator | 00:01:57.579 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:57.580028 | orchestrator | 00:01:57.579 STDOUT terraform:  + availability_zone = "nova" 2025-09-08 00:01:57.580032 | orchestrator | 00:01:57.579 STDOUT terraform:  + config_drive = true 2025-09-08 00:01:57.580037 | orchestrator | 00:01:57.580 STDOUT terraform:  + created = (known after apply) 2025-09-08 00:01:57.580087 | orchestrator | 00:01:57.580 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-08 00:01:57.580095 | orchestrator | 00:01:57.580 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-08 00:01:57.580165 | orchestrator | 00:01:57.580 STDOUT terraform:  + force_delete = false 2025-09-08 00:01:57.580171 | orchestrator | 00:01:57.580 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-08 00:01:57.580253 | orchestrator | 00:01:57.580 STDOUT 
terraform:  + id = (known after apply) 2025-09-08 00:01:57.580259 | orchestrator | 00:01:57.580 STDOUT terraform:  + image_id = (known after apply) 2025-09-08 00:01:57.580265 | orchestrator | 00:01:57.580 STDOUT terraform:  + image_name = (known after apply) 2025-09-08 00:01:57.580288 | orchestrator | 00:01:57.580 STDOUT terraform:  + key_pair = "testbed" 2025-09-08 00:01:57.580350 | orchestrator | 00:01:57.580 STDOUT terraform:  + name = "testbed-node-5" 2025-09-08 00:01:57.580356 | orchestrator | 00:01:57.580 STDOUT terraform:  + power_state = "active" 2025-09-08 00:01:57.580387 | orchestrator | 00:01:57.580 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.580476 | orchestrator | 00:01:57.580 STDOUT terraform:  + security_groups = (known after apply) 2025-09-08 00:01:57.580482 | orchestrator | 00:01:57.580 STDOUT terraform:  + stop_before_destroy = false 2025-09-08 00:01:57.580486 | orchestrator | 00:01:57.580 STDOUT terraform:  + updated = (known after apply) 2025-09-08 00:01:57.580494 | orchestrator | 00:01:57.580 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-08 00:01:57.580549 | orchestrator | 00:01:57.580 STDOUT terraform:  + block_device { 2025-09-08 00:01:57.580563 | orchestrator | 00:01:57.580 STDOUT terraform:  + boot_index = 0 2025-09-08 00:01:57.580569 | orchestrator | 00:01:57.580 STDOUT terraform:  + delete_on_termination = false 2025-09-08 00:01:57.580651 | orchestrator | 00:01:57.580 STDOUT terraform:  + destination_type = "volume" 2025-09-08 00:01:57.580657 | orchestrator | 00:01:57.580 STDOUT terraform:  + multiattach = false 2025-09-08 00:01:57.580663 | orchestrator | 00:01:57.580 STDOUT terraform:  + source_type = "volume" 2025-09-08 00:01:57.580745 | orchestrator | 00:01:57.580 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:57.580751 | orchestrator | 00:01:57.580 STDOUT terraform:  } 2025-09-08 00:01:57.580755 | orchestrator | 00:01:57.580 STDOUT terraform:  + network 
{ 2025-09-08 00:01:57.580759 | orchestrator | 00:01:57.580 STDOUT terraform:  + access_network = false 2025-09-08 00:01:57.580801 | orchestrator | 00:01:57.580 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-08 00:01:57.580808 | orchestrator | 00:01:57.580 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-08 00:01:57.580883 | orchestrator | 00:01:57.580 STDOUT terraform:  + mac = (known after apply) 2025-09-08 00:01:57.580889 | orchestrator | 00:01:57.580 STDOUT terraform:  + name = (known after apply) 2025-09-08 00:01:57.580899 | orchestrator | 00:01:57.580 STDOUT terraform:  + port = (known after apply) 2025-09-08 00:01:57.580975 | orchestrator | 00:01:57.580 STDOUT terraform:  + uuid = (known after apply) 2025-09-08 00:01:57.580981 | orchestrator | 00:01:57.580 STDOUT terraform:  } 2025-09-08 00:01:57.580984 | orchestrator | 00:01:57.580 STDOUT terraform:  } 2025-09-08 00:01:57.580990 | orchestrator | 00:01:57.580 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-09-08 00:01:57.581035 | orchestrator | 00:01:57.580 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-09-08 00:01:57.581043 | orchestrator | 00:01:57.581 STDOUT terraform:  + fingerprint = (known after apply) 2025-09-08 00:01:57.581089 | orchestrator | 00:01:57.581 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.581096 | orchestrator | 00:01:57.581 STDOUT terraform:  + name = "testbed" 2025-09-08 00:01:57.581176 | orchestrator | 00:01:57.581 STDOUT terraform:  + private_key = (sensitive value) 2025-09-08 00:01:57.581182 | orchestrator | 00:01:57.581 STDOUT terraform:  + public_key = (known after apply) 2025-09-08 00:01:57.581188 | orchestrator | 00:01:57.581 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.581256 | orchestrator | 00:01:57.581 STDOUT terraform:  + user_id = (known after apply) 2025-09-08 00:01:57.581261 | orchestrator | 00:01:57.581 STDOUT terraform:  } 2025-09-08 
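The planned node instances above could come from a Terraform resource along these lines. This is a hypothetical reconstruction, not the actual osism/testbed module (which is not shown in this log): the count, the `user_data` source, and the referenced volume and port resources are assumptions; only the literal attribute values are taken from the plan output.

```hcl
# Hypothetical sketch of the configuration behind the planned node instances.
# Resource names match the plan; referenced resources and the count are assumed.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml") # shown as a hash in the plan output

  # Boot from a pre-created volume; keep it when the instance is destroyed.
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
  }

  # Attach via a pre-created management port (see the port resources in the plan).
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```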
2025-09-08 00:01:57.581298 | orchestrator | 00:01:57.581 STDOUT terraform: (continued; per-line log prefixes elided, identical repeated blocks condensed)

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [8] will be created
  (eight further attachments planned with the identical attribute set)

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      (same attribute set as manager_port_management above, with these blocks:)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  (same as node_port_management[0], with fixed_ip ip_address = "192.168.16.11")
terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-08 00:01:57.594405 | orchestrator | 00:01:57.587 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-08 00:01:57.594421 | orchestrator | 00:01:57.587 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-08 00:01:57.594504 | orchestrator | 00:01:57.587 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-08 00:01:57.594604 | orchestrator | 00:01:57.587 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-08 00:01:57.594623 | orchestrator | 00:01:57.587 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:57.594627 | orchestrator | 00:01:57.587 STDOUT terraform:  + device_id = (known after apply) 2025-09-08 00:01:57.594630 | orchestrator | 00:01:57.587 STDOUT terraform:  + device_owner = (known after apply) 2025-09-08 00:01:57.594634 | orchestrator | 00:01:57.587 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-08 00:01:57.594653 | orchestrator | 00:01:57.587 STDOUT terraform:  + dns_name = (known after apply) 2025-09-08 00:01:57.594657 | orchestrator | 00:01:57.587 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.594660 | orchestrator | 00:01:57.587 STDOUT terraform:  + mac_address = (known after apply) 2025-09-08 00:01:57.594756 | orchestrator | 00:01:57.587 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:57.594761 | orchestrator | 00:01:57.587 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-08 00:01:57.594780 | orchestrator | 00:01:57.587 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-08 00:01:57.594784 | orchestrator | 00:01:57.587 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.594788 | orchestrator | 00:01:57.587 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-08 00:01:57.594792 | orchestrator | 00:01:57.587 STDOUT terraform:  + 
tenant_id = (known after apply) 2025-09-08 00:01:57.594898 | orchestrator | 00:01:57.587 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.594948 | orchestrator | 00:01:57.587 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-08 00:01:57.594952 | orchestrator | 00:01:57.587 STDOUT terraform:  } 2025-09-08 00:01:57.594956 | orchestrator | 00:01:57.587 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.594960 | orchestrator | 00:01:57.587 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-08 00:01:57.594964 | orchestrator | 00:01:57.587 STDOUT terraform:  } 2025-09-08 00:01:57.594968 | orchestrator | 00:01:57.587 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.594972 | orchestrator | 00:01:57.587 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-08 00:01:57.594979 | orchestrator | 00:01:57.587 STDOUT terraform:  } 2025-09-08 00:01:57.594983 | orchestrator | 00:01:57.587 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.594987 | orchestrator | 00:01:57.587 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-08 00:01:57.594991 | orchestrator | 00:01:57.587 STDOUT terraform:  } 2025-09-08 00:01:57.594994 | orchestrator | 00:01:57.587 STDOUT terraform:  + binding (known after apply) 2025-09-08 00:01:57.594998 | orchestrator | 00:01:57.587 STDOUT terraform:  + fixed_ip { 2025-09-08 00:01:57.595002 | orchestrator | 00:01:57.587 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-08 00:01:57.595006 | orchestrator | 00:01:57.587 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:57.595010 | orchestrator | 00:01:57.587 STDOUT terraform:  } 2025-09-08 00:01:57.595013 | orchestrator | 00:01:57.588 STDOUT terraform:  } 2025-09-08 00:01:57.595017 | orchestrator | 00:01:57.588 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-08 00:01:57.595021 | orchestrator | 00:01:57.588 STDOUT terraform:  + resource 
"openstack_networking_port_v2" "node_port_management" { 2025-09-08 00:01:57.595025 | orchestrator | 00:01:57.588 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-08 00:01:57.595029 | orchestrator | 00:01:57.588 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-08 00:01:57.595033 | orchestrator | 00:01:57.588 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-08 00:01:57.595037 | orchestrator | 00:01:57.588 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:57.595040 | orchestrator | 00:01:57.588 STDOUT terraform:  + device_id = (known after apply) 2025-09-08 00:01:57.595044 | orchestrator | 00:01:57.588 STDOUT terraform:  + device_owner = (known after apply) 2025-09-08 00:01:57.595051 | orchestrator | 00:01:57.588 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-08 00:01:57.595055 | orchestrator | 00:01:57.588 STDOUT terraform:  + dns_name = (known after apply) 2025-09-08 00:01:57.595061 | orchestrator | 00:01:57.588 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.595065 | orchestrator | 00:01:57.588 STDOUT terraform:  + mac_address = (known after apply) 2025-09-08 00:01:57.595069 | orchestrator | 00:01:57.588 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:57.595073 | orchestrator | 00:01:57.588 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-08 00:01:57.595083 | orchestrator | 00:01:57.588 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-08 00:01:57.595087 | orchestrator | 00:01:57.588 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.595090 | orchestrator | 00:01:57.588 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-08 00:01:57.595094 | orchestrator | 00:01:57.588 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:57.595098 | orchestrator | 00:01:57.588 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595102 | 
orchestrator | 00:01:57.588 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-08 00:01:57.595129 | orchestrator | 00:01:57.588 STDOUT terraform:  } 2025-09-08 00:01:57.595133 | orchestrator | 00:01:57.588 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595137 | orchestrator | 00:01:57.588 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-08 00:01:57.595141 | orchestrator | 00:01:57.588 STDOUT terraform:  } 2025-09-08 00:01:57.595145 | orchestrator | 00:01:57.588 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595149 | orchestrator | 00:01:57.588 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-08 00:01:57.595153 | orchestrator | 00:01:57.588 STDOUT terraform:  } 2025-09-08 00:01:57.595156 | orchestrator | 00:01:57.588 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595160 | orchestrator | 00:01:57.588 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-08 00:01:57.595164 | orchestrator | 00:01:57.588 STDOUT terraform:  } 2025-09-08 00:01:57.595168 | orchestrator | 00:01:57.588 STDOUT terraform:  + binding (known after apply) 2025-09-08 00:01:57.595172 | orchestrator | 00:01:57.588 STDOUT terraform:  + fixed_ip { 2025-09-08 00:01:57.595175 | orchestrator | 00:01:57.588 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-08 00:01:57.595179 | orchestrator | 00:01:57.588 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:57.595183 | orchestrator | 00:01:57.588 STDOUT terraform:  } 2025-09-08 00:01:57.595187 | orchestrator | 00:01:57.588 STDOUT terraform:  } 2025-09-08 00:01:57.595190 | orchestrator | 00:01:57.588 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-08 00:01:57.595194 | orchestrator | 00:01:57.588 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-08 00:01:57.595198 | orchestrator | 00:01:57.588 STDOUT terraform:  + admin_state_up = (known after 
apply) 2025-09-08 00:01:57.595202 | orchestrator | 00:01:57.588 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-08 00:01:57.595206 | orchestrator | 00:01:57.589 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-08 00:01:57.595209 | orchestrator | 00:01:57.589 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:57.595213 | orchestrator | 00:01:57.589 STDOUT terraform:  + device_id = (known after apply) 2025-09-08 00:01:57.595217 | orchestrator | 00:01:57.589 STDOUT terraform:  + device_owner = (known after apply) 2025-09-08 00:01:57.595221 | orchestrator | 00:01:57.589 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-08 00:01:57.595224 | orchestrator | 00:01:57.589 STDOUT terraform:  + dns_name = (known after apply) 2025-09-08 00:01:57.595228 | orchestrator | 00:01:57.589 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.595232 | orchestrator | 00:01:57.589 STDOUT terraform:  + mac_address = (known after apply) 2025-09-08 00:01:57.595236 | orchestrator | 00:01:57.589 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:57.595242 | orchestrator | 00:01:57.589 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-08 00:01:57.595250 | orchestrator | 00:01:57.589 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-08 00:01:57.595253 | orchestrator | 00:01:57.589 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.595257 | orchestrator | 00:01:57.589 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-08 00:01:57.595266 | orchestrator | 00:01:57.589 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:57.595270 | orchestrator | 00:01:57.589 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595274 | orchestrator | 00:01:57.589 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-08 00:01:57.595278 | orchestrator | 00:01:57.589 STDOUT terraform:  } 2025-09-08 
00:01:57.595281 | orchestrator | 00:01:57.589 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595285 | orchestrator | 00:01:57.589 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-08 00:01:57.595289 | orchestrator | 00:01:57.589 STDOUT terraform:  } 2025-09-08 00:01:57.595293 | orchestrator | 00:01:57.589 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595296 | orchestrator | 00:01:57.589 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-08 00:01:57.595300 | orchestrator | 00:01:57.589 STDOUT terraform:  } 2025-09-08 00:01:57.595304 | orchestrator | 00:01:57.589 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595308 | orchestrator | 00:01:57.589 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-08 00:01:57.595312 | orchestrator | 00:01:57.589 STDOUT terraform:  } 2025-09-08 00:01:57.595315 | orchestrator | 00:01:57.589 STDOUT terraform:  + binding (known after apply) 2025-09-08 00:01:57.595319 | orchestrator | 00:01:57.589 STDOUT terraform:  + fixed_ip { 2025-09-08 00:01:57.595323 | orchestrator | 00:01:57.589 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-08 00:01:57.595327 | orchestrator | 00:01:57.589 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:57.595331 | orchestrator | 00:01:57.589 STDOUT terraform:  } 2025-09-08 00:01:57.595334 | orchestrator | 00:01:57.589 STDOUT terraform:  } 2025-09-08 00:01:57.595338 | orchestrator | 00:01:57.589 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-08 00:01:57.595342 | orchestrator | 00:01:57.589 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-08 00:01:57.595346 | orchestrator | 00:01:57.589 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-08 00:01:57.595350 | orchestrator | 00:01:57.589 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-08 00:01:57.595353 | orchestrator | 
00:01:57.589 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-08 00:01:57.595357 | orchestrator | 00:01:57.589 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:57.595361 | orchestrator | 00:01:57.589 STDOUT terraform:  + device_id = (known after apply) 2025-09-08 00:01:57.595364 | orchestrator | 00:01:57.589 STDOUT terraform:  + device_owner = (known after apply) 2025-09-08 00:01:57.595368 | orchestrator | 00:01:57.590 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-08 00:01:57.595375 | orchestrator | 00:01:57.590 STDOUT terraform:  + dns_name = (known after apply) 2025-09-08 00:01:57.595379 | orchestrator | 00:01:57.590 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.595382 | orchestrator | 00:01:57.590 STDOUT terraform:  + mac_address = (known after apply) 2025-09-08 00:01:57.595386 | orchestrator | 00:01:57.590 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:57.595390 | orchestrator | 00:01:57.590 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-08 00:01:57.595393 | orchestrator | 00:01:57.590 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-08 00:01:57.595397 | orchestrator | 00:01:57.590 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.595401 | orchestrator | 00:01:57.590 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-08 00:01:57.595405 | orchestrator | 00:01:57.590 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:57.595409 | orchestrator | 00:01:57.590 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595412 | orchestrator | 00:01:57.590 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-08 00:01:57.595419 | orchestrator | 00:01:57.590 STDOUT terraform:  } 2025-09-08 00:01:57.595423 | orchestrator | 00:01:57.590 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595427 | orchestrator | 00:01:57.590 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-09-08 00:01:57.595431 | orchestrator | 00:01:57.590 STDOUT terraform:  } 2025-09-08 00:01:57.595435 | orchestrator | 00:01:57.590 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595439 | orchestrator | 00:01:57.590 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-08 00:01:57.595442 | orchestrator | 00:01:57.590 STDOUT terraform:  } 2025-09-08 00:01:57.595446 | orchestrator | 00:01:57.590 STDOUT terraform:  + allowed_address_pairs { 2025-09-08 00:01:57.595450 | orchestrator | 00:01:57.590 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-08 00:01:57.595454 | orchestrator | 00:01:57.590 STDOUT terraform:  } 2025-09-08 00:01:57.595457 | orchestrator | 00:01:57.590 STDOUT terraform:  + binding (known after apply) 2025-09-08 00:01:57.595461 | orchestrator | 00:01:57.590 STDOUT terraform:  + fixed_ip { 2025-09-08 00:01:57.595465 | orchestrator | 00:01:57.590 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-08 00:01:57.595469 | orchestrator | 00:01:57.590 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:57.595473 | orchestrator | 00:01:57.590 STDOUT terraform:  } 2025-09-08 00:01:57.595477 | orchestrator | 00:01:57.590 STDOUT terraform:  } 2025-09-08 00:01:57.595480 | orchestrator | 00:01:57.590 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-08 00:01:57.595484 | orchestrator | 00:01:57.590 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-08 00:01:57.595488 | orchestrator | 00:01:57.590 STDOUT terraform:  + force_destroy = false 2025-09-08 00:01:57.595492 | orchestrator | 00:01:57.590 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.595501 | orchestrator | 00:01:57.590 STDOUT terraform:  + port_id = (known after apply) 2025-09-08 00:01:57.595505 | orchestrator | 00:01:57.590 STDOUT terraform:  + region = (known after apply) 2025-09-08 
00:01:57.595509 | orchestrator | 00:01:57.590 STDOUT terraform:  + router_id = (known after apply) 2025-09-08 00:01:57.595512 | orchestrator | 00:01:57.590 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-08 00:01:57.595516 | orchestrator | 00:01:57.590 STDOUT terraform:  } 2025-09-08 00:01:57.595520 | orchestrator | 00:01:57.590 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-08 00:01:57.595524 | orchestrator | 00:01:57.590 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-08 00:01:57.595527 | orchestrator | 00:01:57.590 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-08 00:01:57.595531 | orchestrator | 00:01:57.590 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:57.595535 | orchestrator | 00:01:57.590 STDOUT terraform:  + availability_zone_hints = [ 2025-09-08 00:01:57.595540 | orchestrator | 00:01:57.590 STDOUT terraform:  2025-09-08 00:01:57.595544 | orchestrator | 00:01:57.591 STDOUT terraform:  + "nova", 2025-09-08 00:01:57.595548 | orchestrator | 00:01:57.591 STDOUT terraform:  ] 2025-09-08 00:01:57.595552 | orchestrator | 00:01:57.591 STDOUT terraform:  + distributed = (known after apply) 2025-09-08 00:01:57.595556 | orchestrator | 00:01:57.591 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-08 00:01:57.595563 | orchestrator | 00:01:57.591 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-08 00:01:57.595569 | orchestrator | 00:01:57.591 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-08 00:01:57.595573 | orchestrator | 00:01:57.591 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.595577 | orchestrator | 00:01:57.591 STDOUT terraform:  + name = "testbed" 2025-09-08 00:01:57.595580 | orchestrator | 00:01:57.591 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.595587 | orchestrator | 00:01:57.591 STDOUT terraform:  + tenant_id 
= (known after apply) 2025-09-08 00:01:57.595591 | orchestrator | 00:01:57.591 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-08 00:01:57.595595 | orchestrator | 00:01:57.591 STDOUT terraform:  } 2025-09-08 00:01:57.595599 | orchestrator | 00:01:57.591 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-08 00:01:57.595604 | orchestrator | 00:01:57.591 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-08 00:01:57.595608 | orchestrator | 00:01:57.591 STDOUT terraform:  + description = "ssh" 2025-09-08 00:01:57.595612 | orchestrator | 00:01:57.591 STDOUT terraform:  + direction = "ingress" 2025-09-08 00:01:57.595616 | orchestrator | 00:01:57.591 STDOUT terraform:  + ethertype = "IPv4" 2025-09-08 00:01:57.595619 | orchestrator | 00:01:57.591 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.595624 | orchestrator | 00:01:57.591 STDOUT terraform:  + port_range_max = 22 2025-09-08 00:01:57.595630 | orchestrator | 00:01:57.591 STDOUT terraform:  + port_range_min = 22 2025-09-08 00:01:57.595634 | orchestrator | 00:01:57.591 STDOUT terraform:  + protocol = "tcp" 2025-09-08 00:01:57.595638 | orchestrator | 00:01:57.591 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.595642 | orchestrator | 00:01:57.591 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-08 00:01:57.595646 | orchestrator | 00:01:57.591 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-08 00:01:57.595649 | orchestrator | 00:01:57.591 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-08 00:01:57.595653 | orchestrator | 00:01:57.591 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-08 00:01:57.595657 | orchestrator | 00:01:57.591 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:57.595661 | orchestrator | 00:01:57.591 STDOUT terraform:  } 
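A plan in which six `node_port_management` ports differ only in their fixed IP, while sharing the same four allowed-address pairs, is typically produced by a single counted resource. The following is a minimal sketch of such a declaration, not the actual testbed module: the `count` value, the `cidrhost` arithmetic, and the referenced `net_management`/`subnet_management` resource names are assumptions for illustration.

```hcl
# Sketch: one management port per node; addresses start at 192.168.16.10
# and the four allowed-address pairs match the plan output above.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id # assumed name

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed name
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index)       # .10 .. .15
  }

  dynamic "allowed_address_pairs" {
    for_each = [
      "192.168.112.0/20",
      "192.168.16.254/20",
      "192.168.16.8/20",
      "192.168.16.9/20",
    ]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}
```

The `dynamic` block keeps the four pairs in one list instead of four literal nested blocks, which is why every port in the plan shows an identical set.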
2025-09-08 00:01:57.591 | orchestrator | 00:01:57.591 STDOUT terraform:
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.596201 | orchestrator | 00:01:57.595 STDOUT terraform:  + name = "testbed-node" 2025-09-08 00:01:57.596209 | orchestrator | 00:01:57.595 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.596213 | orchestrator | 00:01:57.595 STDOUT terraform:  + stateful = (known after apply) 2025-09-08 00:01:57.596217 | orchestrator | 00:01:57.595 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:57.596221 | orchestrator | 00:01:57.595 STDOUT terraform:  } 2025-09-08 00:01:57.596225 | orchestrator | 00:01:57.595 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-08 00:01:57.596228 | orchestrator | 00:01:57.595 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-08 00:01:57.596232 | orchestrator | 00:01:57.595 STDOUT terraform:  + all_tags = (known after apply) 2025-09-08 00:01:57.596236 | orchestrator | 00:01:57.595 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-08 00:01:57.596240 | orchestrator | 00:01:57.595 STDOUT terraform:  + dns_nameservers = [ 2025-09-08 00:01:57.596243 | orchestrator | 00:01:57.595 STDOUT terraform:  + "8.8.8.8", 2025-09-08 00:01:57.596250 | orchestrator | 00:01:57.595 STDOUT terraform:  + "9.9.9.9", 2025-09-08 00:01:57.596254 | orchestrator | 00:01:57.595 STDOUT terraform:  ] 2025-09-08 00:01:57.596258 | orchestrator | 00:01:57.595 STDOUT terraform:  + enable_dhcp = true 2025-09-08 00:01:57.596262 | orchestrator | 00:01:57.595 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-08 00:01:57.596266 | orchestrator | 00:01:57.595 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.596269 | orchestrator | 00:01:57.595 STDOUT terraform:  + ip_version = 4 2025-09-08 00:01:57.596273 | orchestrator | 00:01:57.595 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-08 00:01:57.596277 | orchestrator | 00:01:57.595 STDOUT terraform:  
+ ipv6_ra_mode = (known after apply) 2025-09-08 00:01:57.596281 | orchestrator | 00:01:57.595 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-08 00:01:57.596285 | orchestrator | 00:01:57.595 STDOUT terraform:  + network_id = (known after apply) 2025-09-08 00:01:57.596288 | orchestrator | 00:01:57.595 STDOUT terraform:  + no_gateway = false 2025-09-08 00:01:57.596292 | orchestrator | 00:01:57.595 STDOUT terraform:  + region = (known after apply) 2025-09-08 00:01:57.596296 | orchestrator | 00:01:57.595 STDOUT terraform:  + service_types = (known after apply) 2025-09-08 00:01:57.596300 | orchestrator | 00:01:57.595 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-08 00:01:57.596303 | orchestrator | 00:01:57.595 STDOUT terraform:  + allocation_pool { 2025-09-08 00:01:57.596307 | orchestrator | 00:01:57.595 STDOUT terraform:  + end = "192.168.31.250" 2025-09-08 00:01:57.596311 | orchestrator | 00:01:57.596 STDOUT terraform:  + start = "192.168.31.200" 2025-09-08 00:01:57.596315 | orchestrator | 00:01:57.596 STDOUT terraform:  } 2025-09-08 00:01:57.596319 | orchestrator | 00:01:57.596 STDOUT terraform:  } 2025-09-08 00:01:57.596323 | orchestrator | 00:01:57.596 STDOUT terraform:  # terraform_data.image will be created 2025-09-08 00:01:57.596326 | orchestrator | 00:01:57.596 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-08 00:01:57.596330 | orchestrator | 00:01:57.596 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.596334 | orchestrator | 00:01:57.596 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-08 00:01:57.596338 | orchestrator | 00:01:57.596 STDOUT terraform:  + output = (known after apply) 2025-09-08 00:01:57.596341 | orchestrator | 00:01:57.596 STDOUT terraform:  } 2025-09-08 00:01:57.596348 | orchestrator | 00:01:57.596 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-08 00:01:57.596354 | orchestrator | 00:01:57.596 STDOUT terraform:  + resource "terraform_data" "image_node" 
{ 2025-09-08 00:01:57.596358 | orchestrator | 00:01:57.596 STDOUT terraform:  + id = (known after apply) 2025-09-08 00:01:57.596362 | orchestrator | 00:01:57.596 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-08 00:01:57.596366 | orchestrator | 00:01:57.596 STDOUT terraform:  + output = (known after apply) 2025-09-08 00:01:57.596369 | orchestrator | 00:01:57.596 STDOUT terraform:  } 2025-09-08 00:01:57.596373 | orchestrator | 00:01:57.596 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-08 00:01:57.596381 | orchestrator | 00:01:57.596 STDOUT terraform: Changes to Outputs: 2025-09-08 00:01:57.596385 | orchestrator | 00:01:57.596 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-08 00:01:57.596389 | orchestrator | 00:01:57.596 STDOUT terraform:  + private_key = (sensitive value) 2025-09-08 00:01:58.256327 | orchestrator | 00:01:58.256 STDOUT terraform: terraform_data.image: Creating... 2025-09-08 00:01:58.256386 | orchestrator | 00:01:58.256 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-08 00:01:58.257822 | orchestrator | 00:01:58.257 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=9248c5db-ffe8-5b4e-d0ba-8a975b1fed3c] 2025-09-08 00:01:58.258210 | orchestrator | 00:01:58.258 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=2ffbe484-1b92-53d5-1832-8c31f8a3887a] 2025-09-08 00:01:58.273158 | orchestrator | 00:01:58.272 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-08 00:01:58.273201 | orchestrator | 00:01:58.272 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-08 00:01:58.281060 | orchestrator | 00:01:58.280 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-08 00:01:58.284769 | orchestrator | 00:01:58.284 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
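As an aside for readers following the plan output above: the VRRP security group rule it describes corresponds to Terraform configuration roughly like the following. This is a hedged reconstruction from the plan attributes in the log, not the testbed repository's actual source; in particular, the `security_group_id` reference is an assumption, since the plan only shows it as `(known after apply)`.

```hcl
# Sketch reconstructed from the plan output; attribute values are taken
# from the log, the security_group_id reference is assumed.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number 112 is VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id # assumed parent group
}
```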
2025-09-08 00:01:58.289797 | orchestrator | 00:01:58.287 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-08 00:01:58.290074 | orchestrator | 00:01:58.288 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-08 00:01:58.290201 | orchestrator | 00:01:58.288 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-08 00:01:58.290216 | orchestrator | 00:01:58.288 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-08 00:01:58.290243 | orchestrator | 00:01:58.288 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-08 00:01:58.290270 | orchestrator | 00:01:58.289 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-08 00:01:58.755043 | orchestrator | 00:01:58.754 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-08 00:01:58.770464 | orchestrator | 00:01:58.761 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-08 00:01:58.770554 | orchestrator | 00:01:58.770 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-08 00:01:58.774987 | orchestrator | 00:01:58.774 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-08 00:01:58.787226 | orchestrator | 00:01:58.787 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-08 00:01:58.796304 | orchestrator | 00:01:58.795 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-08 00:01:59.386560 | orchestrator | 00:01:59.386 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=a5665646-295d-4bc1-83e5-c782d0266f3f]
2025-09-08 00:01:59.394911 | orchestrator | 00:01:59.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-08 00:02:01.892457 | orchestrator | 00:02:01.890 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=8ee7eb97-103b-48c1-b599-577d77aa5f2d]
2025-09-08 00:02:01.899178 | orchestrator | 00:02:01.898 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-08 00:02:01.913745 | orchestrator | 00:02:01.913 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=d104b958-607f-4535-a6c3-7c5e10e43f98]
2025-09-08 00:02:01.917609 | orchestrator | 00:02:01.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-08 00:02:01.933891 | orchestrator | 00:02:01.933 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=71c81d38-851a-45a9-affe-242d84188eb5]
2025-09-08 00:02:01.938942 | orchestrator | 00:02:01.938 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-08 00:02:01.973333 | orchestrator | 00:02:01.973 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=0ed32d85-e4d7-46a8-b481-7cb7d466dd72]
2025-09-08 00:02:01.979690 | orchestrator | 00:02:01.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-08 00:02:01.984792 | orchestrator | 00:02:01.984 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=93f20ee1-aa44-492e-8fd6-2ddde0eec0c3]
2025-09-08 00:02:01.992251 | orchestrator | 00:02:01.992 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=b6d83665-6669-4f1a-a01e-1cb1a99e815e]
2025-09-08 00:02:01.992526 | orchestrator | 00:02:01.992 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-08 00:02:01.998464 | orchestrator | 00:02:01.998 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-08 00:02:02.025505 | orchestrator | 00:02:02.025 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=f2189477-3d04-4590-9bb4-080bdc335962]
2025-09-08 00:02:02.062455 | orchestrator | 00:02:02.062 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-08 00:02:02.068540 | orchestrator | 00:02:02.068 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=7ce0c10ef13c49e0a643a48844d44598152a508e]
2025-09-08 00:02:02.080032 | orchestrator | 00:02:02.079 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=4631f46e-eb61-4253-8eaf-0e479598f4cb]
2025-09-08 00:02:02.084605 | orchestrator | 00:02:02.082 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-08 00:02:02.088731 | orchestrator | 00:02:02.088 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-08 00:02:02.092696 | orchestrator | 00:02:02.092 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=da2ce335b93bc69d0a3b2c65d6dbe61e81608931]
2025-09-08 00:02:02.103624 | orchestrator | 00:02:02.103 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=bdc2c250-49e1-41fe-b0ad-7dd2c4789359]
2025-09-08 00:02:02.785110 | orchestrator | 00:02:02.784 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=da17a974-2052-4ef5-933e-f04448611c0e]
2025-09-08 00:02:03.104059 | orchestrator | 00:02:03.103 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=23d33958-1a73-4af4-9375-d4b41b0dfabd]
2025-09-08 00:02:03.111672 | orchestrator | 00:02:03.111 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-08 00:02:05.397926 | orchestrator | 00:02:05.397 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=5283f4eb-967a-45cb-9108-62eab8899a44]
2025-09-08 00:02:05.441884 | orchestrator | 00:02:05.441 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=f986762e-5135-4807-98e0-2a6dc6746cab]
2025-09-08 00:02:05.450613 | orchestrator | 00:02:05.450 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=5172d866-a36c-423d-97d0-17dd15bbbbb9]
2025-09-08 00:02:05.467241 | orchestrator | 00:02:05.466 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=ca7ccc28-8d09-4824-b2e3-b19f9e947096]
2025-09-08 00:02:06.126513 | orchestrator | 00:02:06.126 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=8e80fb95-b903-4345-b413-f9f5e6f33612]
2025-09-08 00:02:06.180684 | orchestrator | 00:02:06.180 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=770c2301-18bb-4c29-9bb9-bab8a6016772]
2025-09-08 00:02:08.288261 | orchestrator | 00:02:08.287 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 5s [id=9b9603ac-cf73-423b-bcbe-3ab178f03ce4]
2025-09-08 00:02:08.294919 | orchestrator | 00:02:08.294 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-08 00:02:08.296253 | orchestrator | 00:02:08.296 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-08 00:02:08.307767 | orchestrator | 00:02:08.307 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-08 00:02:08.640945 | orchestrator | 00:02:08.640 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=53ec79ab-a3ac-4d39-85ba-8d9ddf7167f1]
2025-09-08 00:02:08.647889 | orchestrator | 00:02:08.647 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-08 00:02:08.651710 | orchestrator | 00:02:08.649 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-08 00:02:08.655450 | orchestrator | 00:02:08.655 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-08 00:02:08.660019 | orchestrator | 00:02:08.656 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-08 00:02:08.663754 | orchestrator | 00:02:08.663 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-08 00:02:08.663790 | orchestrator | 00:02:08.663 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
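As a side note, the management subnet created above (CIDR 192.168.16.0/20 with an allocation pool of 192.168.31.200–192.168.31.250, per the plan) can be sanity-checked with Python's standard-library `ipaddress` module. A minimal sketch, using only values taken from this log:

```python
import ipaddress

# Values taken from the terraform plan for subnet_management in this log.
cidr = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# Both ends of the DHCP allocation pool must fall inside the subnet CIDR.
assert pool_start in cidr and pool_end in cidr

# A /20 spans 4096 addresses (192.168.16.0 through 192.168.31.255).
print(cidr.num_addresses)  # → 4096
```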
2025-09-08 00:02:08.671635 | orchestrator | 00:02:08.670 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=fd57be5e-0935-40a6-b68a-2c2be6b9da2a]
2025-09-08 00:02:08.680796 | orchestrator | 00:02:08.680 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-08 00:02:08.682461 | orchestrator | 00:02:08.682 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-08 00:02:08.691379 | orchestrator | 00:02:08.691 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-08 00:02:08.872467 | orchestrator | 00:02:08.872 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=db4ef04b-79a4-460a-8a3c-70c7cac5f875]
2025-09-08 00:02:08.885005 | orchestrator | 00:02:08.884 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-08 00:02:09.137285 | orchestrator | 00:02:09.136 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=fbf654d0-f39f-4341-8d3f-c2050ed76c92]
2025-09-08 00:02:09.151457 | orchestrator | 00:02:09.150 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-08 00:02:09.295055 | orchestrator | 00:02:09.294 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=a55eb4f0-a2c0-4505-83c7-9b95e021a8b1]
2025-09-08 00:02:09.299872 | orchestrator | 00:02:09.299 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-08 00:02:09.881484 | orchestrator | 00:02:09.881 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=ed4b085b-abe6-4a74-b863-7f022facc4ce]
2025-09-08 00:02:09.899899 | orchestrator | 00:02:09.899 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-08 00:02:09.921601 | orchestrator | 00:02:09.917 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c112827d-6108-420c-b678-09a8350007a9]
2025-09-08 00:02:09.928111 | orchestrator | 00:02:09.927 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-08 00:02:09.935878 | orchestrator | 00:02:09.935 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=09640657-21c0-4b2a-9282-d3ec89b09186]
2025-09-08 00:02:09.946348 | orchestrator | 00:02:09.946 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-08 00:02:10.127392 | orchestrator | 00:02:10.125 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=56597186-3219-46ca-8424-4de991aaa242]
2025-09-08 00:02:10.132215 | orchestrator | 00:02:10.132 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-08 00:02:10.170797 | orchestrator | 00:02:10.170 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=2a4dc4bd-1f2e-45df-bbe9-8844fef7654a]
2025-09-08 00:02:10.313784 | orchestrator | 00:02:10.313 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=2407e9cb-256b-48bd-a247-230ed3e598f7]
2025-09-08 00:02:10.400898 | orchestrator | 00:02:10.400 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=5b793880-dd4b-461d-bc10-16ee480197f1]
2025-09-08 00:02:10.442744 | orchestrator | 00:02:10.442 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=7e620f71-647f-446b-b106-36ecf67f67e6]
2025-09-08 00:02:10.511340 | orchestrator | 00:02:10.510 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=295d0d5c-bebf-487b-a897-6760e3e60c81]
2025-09-08 00:02:10.597026 | orchestrator | 00:02:10.596 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=8df3611a-d36e-4115-b6bb-bf41d74dd155]
2025-09-08 00:02:10.694897 | orchestrator | 00:02:10.694 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=79d4ef46-d0ff-43d0-bfef-d40754170b8f]
2025-09-08 00:02:10.871672 | orchestrator | 00:02:10.871 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=6cbfebcc-098c-440a-8e05-5953c5a564d1]
2025-09-08 00:02:11.159735 | orchestrator | 00:02:11.159 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=76316ffa-c19e-4bdb-a7ff-b4cd50982275]
2025-09-08 00:02:11.644619 | orchestrator | 00:02:11.644 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=31fb4cbc-5295-4c5c-bc92-c225390eeb67]
2025-09-08 00:02:11.670485 | orchestrator | 00:02:11.670 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-08 00:02:11.686787 | orchestrator | 00:02:11.686 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-08 00:02:11.686872 | orchestrator | 00:02:11.686 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-08 00:02:11.687570 | orchestrator | 00:02:11.687 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-08 00:02:11.691776 | orchestrator | 00:02:11.691 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-08 00:02:11.692541 | orchestrator | 00:02:11.692 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-08 00:02:11.706098 | orchestrator | 00:02:11.705 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-08 00:02:13.523546 | orchestrator | 00:02:13.523 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=e19be320-e407-433e-bdb1-351252348135]
2025-09-08 00:02:13.532518 | orchestrator | 00:02:13.532 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-08 00:02:13.538074 | orchestrator | 00:02:13.537 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-08 00:02:13.540288 | orchestrator | 00:02:13.540 STDOUT terraform: local_file.inventory: Creating...
2025-09-08 00:02:13.545862 | orchestrator | 00:02:13.545 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=47d65824ede72d91407adddca051b721e8e54327]
2025-09-08 00:02:13.546658 | orchestrator | 00:02:13.546 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=5f88db24b32b4fdc04553cdd217a5ae095a9deb5]
2025-09-08 00:02:14.259812 | orchestrator | 00:02:14.259 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=e19be320-e407-433e-bdb1-351252348135]
2025-09-08 00:02:21.692101 | orchestrator | 00:02:21.691 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-08 00:02:21.692232 | orchestrator | 00:02:21.692 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-08 00:02:21.695188 | orchestrator | 00:02:21.694 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-08 00:02:21.697387 | orchestrator | 00:02:21.697 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-08 00:02:21.697509 | orchestrator | 00:02:21.697 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-08 00:02:21.708026 | orchestrator | 00:02:21.707 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-08 00:02:31.692281 | orchestrator | 00:02:31.691 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-08 00:02:31.693186 | orchestrator | 00:02:31.692 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-08 00:02:31.695280 | orchestrator | 00:02:31.695 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-08 00:02:31.697436 | orchestrator | 00:02:31.697 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-08 00:02:31.697578 | orchestrator | 00:02:31.697 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-08 00:02:31.708697 | orchestrator | 00:02:31.708 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-08 00:02:32.300038 | orchestrator | 00:02:32.299 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=1071d9dc-f077-426b-b245-ab685f56a287]
2025-09-08 00:02:41.692503 | orchestrator | 00:02:41.692 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-08 00:02:41.693625 | orchestrator | 00:02:41.693 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-08 00:02:41.695780 | orchestrator | 00:02:41.695 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-08 00:02:41.698139 | orchestrator | 00:02:41.697 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-09-08 00:02:41.709370 | orchestrator | 00:02:41.709 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-09-08 00:02:42.347875 | orchestrator | 00:02:42.347 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=3842bb5e-2be6-4a25-a756-6a0cda633b2d]
2025-09-08 00:02:51.696310 | orchestrator | 00:02:51.694 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2025-09-08 00:02:51.696771 | orchestrator | 00:02:51.696 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2025-09-08 00:02:51.699028 | orchestrator | 00:02:51.698 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2025-09-08 00:02:51.710396 | orchestrator | 00:02:51.710 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2025-09-08 00:02:52.342213 | orchestrator | 00:02:52.340 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 40s [id=d8625f32-8a5e-4c1b-98b8-bcce593ec813]
2025-09-08 00:02:52.421120 | orchestrator | 00:02:52.420 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 40s [id=4933b95e-b99d-48e5-a292-2b3395393884]
2025-09-08 00:02:52.701713 | orchestrator | 00:02:52.701 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=60176b77-bf2c-4aed-9fa8-b304a46ede91]
2025-09-08 00:02:52.748330 | orchestrator | 00:02:52.748 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=c08c5ce1-65ff-4dcd-8c15-7abae29bbc2c]
2025-09-08 00:02:52.772333 | orchestrator | 00:02:52.772 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-08 00:02:52.773233 | orchestrator | 00:02:52.773 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-08 00:02:52.774094 | orchestrator | 00:02:52.773 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-08 00:02:52.781886 | orchestrator | 00:02:52.781 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-08 00:02:52.787082 | orchestrator | 00:02:52.786 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1755729684405028787]
2025-09-08 00:02:52.791957 | orchestrator | 00:02:52.791 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-08 00:02:52.792462 | orchestrator | 00:02:52.792 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-08 00:02:52.811610 | orchestrator | 00:02:52.811 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-08 00:02:52.811662 | orchestrator | 00:02:52.811 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-08 00:02:52.811668 | orchestrator | 00:02:52.811 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-08 00:02:52.811711 | orchestrator | 00:02:52.811 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-08 00:02:52.811766 | orchestrator | 00:02:52.811 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-08 00:02:56.421647 | orchestrator | 00:02:56.421 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=1071d9dc-f077-426b-b245-ab685f56a287/93f20ee1-aa44-492e-8fd6-2ddde0eec0c3]
2025-09-08 00:02:56.454781 | orchestrator | 00:02:56.454 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=4933b95e-b99d-48e5-a292-2b3395393884/f2189477-3d04-4590-9bb4-080bdc335962]
2025-09-08 00:02:56.463516 | orchestrator | 00:02:56.463 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=60176b77-bf2c-4aed-9fa8-b304a46ede91/0ed32d85-e4d7-46a8-b481-7cb7d466dd72]
2025-09-08 00:03:02.560653 | orchestrator | 00:03:02.560 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=1071d9dc-f077-426b-b245-ab685f56a287/71c81d38-851a-45a9-affe-242d84188eb5]
2025-09-08 00:03:02.570313 | orchestrator | 00:03:02.569 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=60176b77-bf2c-4aed-9fa8-b304a46ede91/d104b958-607f-4535-a6c3-7c5e10e43f98]
2025-09-08 00:03:02.594124 | orchestrator | 00:03:02.593 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=4933b95e-b99d-48e5-a292-2b3395393884/b6d83665-6669-4f1a-a01e-1cb1a99e815e]
2025-09-08 00:03:02.618694 | orchestrator | 00:03:02.618 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=60176b77-bf2c-4aed-9fa8-b304a46ede91/bdc2c250-49e1-41fe-b0ad-7dd2c4789359]
2025-09-08 00:03:02.648230 | orchestrator | 00:03:02.647 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=1071d9dc-f077-426b-b245-ab685f56a287/4631f46e-eb61-4253-8eaf-0e479598f4cb]
2025-09-08 00:03:02.651248 | orchestrator | 00:03:02.650 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=4933b95e-b99d-48e5-a292-2b3395393884/8ee7eb97-103b-48c1-b599-577d77aa5f2d]
2025-09-08 00:03:02.812572 | orchestrator | 00:03:02.812 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-08 00:03:12.813686 | orchestrator | 00:03:12.813 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-08 00:03:13.196362 | orchestrator | 00:03:13.195 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=8df5545c-7c16-4f38-a849-0bd3f53e2bf4]
2025-09-08 00:03:13.218610 | orchestrator | 00:03:13.218 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-08 00:03:13.218669 | orchestrator | 00:03:13.218 STDOUT terraform: Outputs: 2025-09-08 00:03:13.218679 | orchestrator | 00:03:13.218 STDOUT terraform: manager_address = 2025-09-08 00:03:13.218687 | orchestrator | 00:03:13.218 STDOUT terraform: private_key = 2025-09-08 00:03:13.295277 | orchestrator | ok: Runtime: 0:01:21.543931 2025-09-08 00:03:13.319116 | 2025-09-08 00:03:13.319233 | TASK [Fetch manager address] 2025-09-08 00:03:13.746372 | orchestrator | ok 2025-09-08 00:03:13.757209 | 2025-09-08 00:03:13.757404 | TASK [Set manager_host address] 2025-09-08 00:03:13.840029 | orchestrator | ok 2025-09-08 00:03:13.849512 | 2025-09-08 00:03:13.849692 | LOOP [Update ansible collections] 2025-09-08 00:03:23.853469 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-08 00:03:23.854033 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-08 00:03:23.854105 | orchestrator | Starting galaxy collection install process 2025-09-08 00:03:23.854218 | orchestrator | Process install dependency map 2025-09-08 00:03:23.854251 | orchestrator | Starting collection install process 2025-09-08 00:03:23.854278 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-09-08 00:03:23.854313 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-09-08 00:03:23.854345 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-08 00:03:23.854410 | orchestrator | ok: Item: commons Runtime: 0:00:09.631888 2025-09-08 00:03:30.208825 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-08 00:03:30.209012 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-08 00:03:30.209508 | orchestrator | Starting galaxy 
collection install process 2025-09-08 00:03:30.209619 | orchestrator | Process install dependency map 2025-09-08 00:03:30.209662 | orchestrator | Starting collection install process 2025-09-08 00:03:30.209753 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-09-08 00:03:30.209781 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-09-08 00:03:30.209822 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-08 00:03:30.210917 | orchestrator | ok: Item: services Runtime: 0:00:06.084161 2025-09-08 00:03:30.228925 | 2025-09-08 00:03:30.229040 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-08 00:03:40.797814 | orchestrator | ok 2025-09-08 00:03:40.808734 | 2025-09-08 00:03:40.808878 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-08 00:04:40.859203 | orchestrator | ok 2025-09-08 00:04:40.870260 | 2025-09-08 00:04:40.870424 | TASK [Fetch manager ssh hostkey] 2025-09-08 00:04:42.447651 | orchestrator | Output suppressed because no_log was given 2025-09-08 00:04:42.463216 | 2025-09-08 00:04:42.463384 | TASK [Get ssh keypair from terraform environment] 2025-09-08 00:04:42.999918 | orchestrator | ok: Runtime: 0:00:00.008952 2025-09-08 00:04:43.018277 | 2025-09-08 00:04:43.018464 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-08 00:04:43.065262 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-09-08 00:04:43.074596 | 2025-09-08 00:04:43.074712 | TASK [Run manager part 0] 2025-09-08 00:04:45.309074 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-08 00:04:45.793548 | orchestrator | 2025-09-08 00:04:45.793617 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-08 00:04:45.793627 | orchestrator | 2025-09-08 00:04:45.793647 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-08 00:04:47.717274 | orchestrator | ok: [testbed-manager] 2025-09-08 00:04:47.717370 | orchestrator | 2025-09-08 00:04:47.717403 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-08 00:04:47.717416 | orchestrator | 2025-09-08 00:04:47.717429 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:04:50.362853 | orchestrator | ok: [testbed-manager] 2025-09-08 00:04:50.362913 | orchestrator | 2025-09-08 00:04:50.362923 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-08 00:04:51.015325 | orchestrator | ok: [testbed-manager] 2025-09-08 00:04:51.015407 | orchestrator | 2025-09-08 00:04:51.015427 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-08 00:04:51.067290 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.067354 | orchestrator | 2025-09-08 00:04:51.067365 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-08 00:04:51.094600 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.094641 | orchestrator | 2025-09-08 00:04:51.094648 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-08 00:04:51.123289 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.123327 | 
orchestrator | 2025-09-08 00:04:51.123376 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-08 00:04:51.150617 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.150667 | orchestrator | 2025-09-08 00:04:51.150677 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-08 00:04:51.186751 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.186791 | orchestrator | 2025-09-08 00:04:51.186799 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-08 00:04:51.221939 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.221982 | orchestrator | 2025-09-08 00:04:51.221990 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-08 00:04:51.249386 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:04:51.249443 | orchestrator | 2025-09-08 00:04:51.249455 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-08 00:04:52.008551 | orchestrator | changed: [testbed-manager] 2025-09-08 00:04:52.008599 | orchestrator | 2025-09-08 00:04:52.008606 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-08 00:07:53.618632 | orchestrator | changed: [testbed-manager] 2025-09-08 00:07:53.618707 | orchestrator | 2025-09-08 00:07:53.618724 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-08 00:09:26.815285 | orchestrator | changed: [testbed-manager] 2025-09-08 00:09:26.815389 | orchestrator | 2025-09-08 00:09:26.815408 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-08 00:09:55.084918 | orchestrator | changed: [testbed-manager] 2025-09-08 00:09:55.085021 | orchestrator | 2025-09-08 00:09:55.085041 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-08 00:10:06.171124 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:06.171265 | orchestrator | 2025-09-08 00:10:06.171284 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-08 00:10:06.214577 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:06.214637 | orchestrator | 2025-09-08 00:10:06.214651 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-08 00:10:06.955889 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:06.955925 | orchestrator | 2025-09-08 00:10:06.955935 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-08 00:10:07.634296 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:07.634369 | orchestrator | 2025-09-08 00:10:07.634385 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-08 00:10:14.424159 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:14.424217 | orchestrator | 2025-09-08 00:10:14.424246 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-08 00:10:20.456649 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:20.456748 | orchestrator | 2025-09-08 00:10:20.456770 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-08 00:10:23.157957 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:23.158048 | orchestrator | 2025-09-08 00:10:23.158060 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-08 00:10:24.988900 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:24.988967 | orchestrator | 2025-09-08 00:10:24.988981 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-08 
00:10:26.079876 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-08 00:10:26.079967 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-08 00:10:26.079981 | orchestrator | 2025-09-08 00:10:26.079994 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-08 00:10:26.118683 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-08 00:10:26.118749 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-08 00:10:26.118763 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-08 00:10:26.118775 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-08 00:10:30.058560 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-08 00:10:30.058657 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-08 00:10:30.058672 | orchestrator | 2025-09-08 00:10:30.058685 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-08 00:10:30.651126 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:30.651217 | orchestrator | 2025-09-08 00:10:30.651232 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-08 00:10:52.276835 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-08 00:10:52.276928 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-08 00:10:52.276945 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-08 00:10:52.276958 | orchestrator | 2025-09-08 00:10:52.276970 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-08 00:10:54.585819 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-08 00:10:54.585910 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-08 00:10:54.585928 | orchestrator | 2025-09-08 00:10:54.585940 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-08 00:10:54.585953 | orchestrator | 2025-09-08 00:10:54.585964 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:10:56.017148 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:56.017201 | orchestrator | 2025-09-08 00:10:56.017210 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-08 00:10:56.058769 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:56.058822 | orchestrator | 2025-09-08 00:10:56.058829 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-08 00:10:56.130312 | orchestrator | ok: [testbed-manager] 2025-09-08 00:10:56.130353 | orchestrator | 2025-09-08 00:10:56.130361 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-08 00:10:56.905383 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:56.905426 | orchestrator | 2025-09-08 00:10:56.905434 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-08 00:10:57.626463 | orchestrator | changed: [testbed-manager] 2025-09-08 00:10:57.626511 | orchestrator | 2025-09-08 00:10:57.626520 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-08 00:10:59.010883 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-08 00:10:59.010927 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-08 00:10:59.010932 | orchestrator | 2025-09-08 00:10:59.010947 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-08 00:11:00.420756 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:00.420798 | orchestrator | 2025-09-08 00:11:00.420804 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-08 00:11:02.117274 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-08 00:11:02.117309 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-08 00:11:02.117317 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-08 00:11:02.117323 | orchestrator | 2025-09-08 00:11:02.117331 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-08 00:11:02.178235 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:02.178270 | orchestrator | 2025-09-08 00:11:02.178279 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-08 00:11:02.741339 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:02.741375 | orchestrator | 2025-09-08 00:11:02.741384 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-08 00:11:02.808639 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:02.808673 | orchestrator | 2025-09-08 00:11:02.808682 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-08 00:11:03.683366 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-08 00:11:03.683440 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:03.683456 | orchestrator | 2025-09-08 00:11:03.683469 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-08 00:11:03.718531 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:03.718611 | orchestrator | 2025-09-08 00:11:03.718627 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-08 00:11:03.749446 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:03.749508 | orchestrator | 2025-09-08 00:11:03.749525 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-08 00:11:03.780231 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:03.780285 | orchestrator | 2025-09-08 00:11:03.780298 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-08 00:11:03.820483 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:03.820558 | orchestrator | 2025-09-08 00:11:03.820575 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-08 00:11:04.550897 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:04.550970 | orchestrator | 2025-09-08 00:11:04.550986 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-08 00:11:04.550999 | orchestrator | 2025-09-08 00:11:04.551011 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:11:05.991683 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:05.991730 | orchestrator | 2025-09-08 00:11:05.991737 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-08 00:11:06.956232 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:06.956287 | orchestrator | 2025-09-08 00:11:06.956298 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:11:06.956308 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-08 00:11:06.956315 | orchestrator | 2025-09-08 00:11:07.331956 | orchestrator | ok: Runtime: 0:06:23.678146 2025-09-08 00:11:07.350134 | 2025-09-08 00:11:07.350275 | TASK [Point 
out that the log in on the manager is now possible] 2025-09-08 00:11:07.386340 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-08 00:11:07.395118 | 2025-09-08 00:11:07.395714 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-08 00:11:07.430789 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-08 00:11:07.440675 | 2025-09-08 00:11:07.440813 | TASK [Run manager part 1 + 2] 2025-09-08 00:11:08.182040 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-08 00:11:08.225749 | orchestrator | 2025-09-08 00:11:08.225782 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-08 00:11:08.225789 | orchestrator | 2025-09-08 00:11:08.225800 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:11:10.666648 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:10.666690 | orchestrator | 2025-09-08 00:11:10.666713 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-08 00:11:10.698061 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:10.698097 | orchestrator | 2025-09-08 00:11:10.698106 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-08 00:11:10.734610 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:10.734642 | orchestrator | 2025-09-08 00:11:10.734654 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-08 00:11:10.766810 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:10.766843 | orchestrator | 2025-09-08 00:11:10.766853 | orchestrator | TASK [osism.commons.repository : Set repository_default fact
to default value] *** 2025-09-08 00:11:10.820705 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:10.820741 | orchestrator | 2025-09-08 00:11:10.820751 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-08 00:11:10.874091 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:10.874131 | orchestrator | 2025-09-08 00:11:10.874140 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-08 00:11:10.911849 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-08 00:11:10.911878 | orchestrator | 2025-09-08 00:11:10.911883 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-08 00:11:11.589832 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:11.589890 | orchestrator | 2025-09-08 00:11:11.589901 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-08 00:11:11.641387 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:11.641433 | orchestrator | 2025-09-08 00:11:11.641441 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-08 00:11:12.939226 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:12.939275 | orchestrator | 2025-09-08 00:11:12.939286 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-08 00:11:13.491888 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:13.491937 | orchestrator | 2025-09-08 00:11:13.491947 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-08 00:11:14.526993 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:14.527040 | orchestrator | 2025-09-08 00:11:14.527051 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-08 00:11:31.905256 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:31.905365 | orchestrator | 2025-09-08 00:11:31.905383 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-08 00:11:32.573709 | orchestrator | ok: [testbed-manager] 2025-09-08 00:11:32.573848 | orchestrator | 2025-09-08 00:11:32.573862 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-08 00:11:32.627824 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:32.627891 | orchestrator | 2025-09-08 00:11:32.627906 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-08 00:11:33.580671 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:33.580756 | orchestrator | 2025-09-08 00:11:33.580773 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-08 00:11:34.546912 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:34.547000 | orchestrator | 2025-09-08 00:11:34.547020 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-08 00:11:35.126906 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:35.126991 | orchestrator | 2025-09-08 00:11:35.127010 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-08 00:11:35.175842 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-08 00:11:35.175904 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-08 00:11:35.175910 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-08 00:11:35.175915 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-08 00:11:38.728529 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:38.728652 | orchestrator | 2025-09-08 00:11:38.728670 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-08 00:11:47.928685 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-08 00:11:47.928747 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-08 00:11:47.928757 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-08 00:11:47.928765 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-08 00:11:47.928776 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-08 00:11:47.928783 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-08 00:11:47.928789 | orchestrator | 2025-09-08 00:11:47.928797 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-08 00:11:48.979093 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:48.979159 | orchestrator | 2025-09-08 00:11:48.979175 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-08 00:11:49.021224 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:49.021299 | orchestrator | 2025-09-08 00:11:49.021316 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-08 00:11:52.249920 | orchestrator | changed: [testbed-manager] 2025-09-08 00:11:52.249967 | orchestrator | 2025-09-08 00:11:52.249976 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-08 00:11:52.288846 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:11:52.288919 | orchestrator | 2025-09-08 00:11:52.288934 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-08 00:13:34.486078 | orchestrator | changed: [testbed-manager] 2025-09-08 
00:13:34.486177 | orchestrator | 2025-09-08 00:13:34.486196 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-08 00:13:35.656632 | orchestrator | ok: [testbed-manager] 2025-09-08 00:13:35.656672 | orchestrator | 2025-09-08 00:13:35.656681 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:13:35.656688 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-08 00:13:35.656693 | orchestrator | 2025-09-08 00:13:36.069227 | orchestrator | ok: Runtime: 0:02:28.020653 2025-09-08 00:13:36.085127 | 2025-09-08 00:13:36.085263 | TASK [Reboot manager] 2025-09-08 00:13:37.620306 | orchestrator | ok: Runtime: 0:00:00.958121 2025-09-08 00:13:37.631427 | 2025-09-08 00:13:37.631554 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-08 00:13:54.030018 | orchestrator | ok 2025-09-08 00:13:54.039116 | 2025-09-08 00:13:54.039239 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-08 00:14:54.088678 | orchestrator | ok 2025-09-08 00:14:54.099974 | 2025-09-08 00:14:54.100130 | TASK [Deploy manager + bootstrap nodes] 2025-09-08 00:14:56.815576 | orchestrator | 2025-09-08 00:14:56.815812 | orchestrator | # DEPLOY MANAGER 2025-09-08 00:14:56.815834 | orchestrator | 2025-09-08 00:14:56.815848 | orchestrator | + set -e 2025-09-08 00:14:56.815861 | orchestrator | + echo 2025-09-08 00:14:56.815875 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-08 00:14:56.815891 | orchestrator | + echo 2025-09-08 00:14:56.815942 | orchestrator | + cat /opt/manager-vars.sh 2025-09-08 00:14:56.818908 | orchestrator | export NUMBER_OF_NODES=6 2025-09-08 00:14:56.818932 | orchestrator | 2025-09-08 00:14:56.818944 | orchestrator | export CEPH_VERSION=reef 2025-09-08 00:14:56.818957 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-08 00:14:56.818969 | orchestrator 
| export MANAGER_VERSION=9.2.0 2025-09-08 00:14:56.818991 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-08 00:14:56.819002 | orchestrator | 2025-09-08 00:14:56.819020 | orchestrator | export ARA=false 2025-09-08 00:14:56.819031 | orchestrator | export DEPLOY_MODE=manager 2025-09-08 00:14:56.819049 | orchestrator | export TEMPEST=true 2025-09-08 00:14:56.819060 | orchestrator | export IS_ZUUL=true 2025-09-08 00:14:56.819071 | orchestrator | 2025-09-08 00:14:56.819088 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-08 00:14:56.819100 | orchestrator | export EXTERNAL_API=false 2025-09-08 00:14:56.819110 | orchestrator | 2025-09-08 00:14:56.819121 | orchestrator | export IMAGE_USER=ubuntu 2025-09-08 00:14:56.819135 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-08 00:14:56.819146 | orchestrator | 2025-09-08 00:14:56.819157 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-08 00:14:56.819172 | orchestrator | 2025-09-08 00:14:56.819183 | orchestrator | + echo 2025-09-08 00:14:56.819199 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-08 00:14:56.820205 | orchestrator | ++ export INTERACTIVE=false 2025-09-08 00:14:56.820222 | orchestrator | ++ INTERACTIVE=false 2025-09-08 00:14:56.820237 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-08 00:14:56.820248 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-08 00:14:56.820570 | orchestrator | + source /opt/manager-vars.sh 2025-09-08 00:14:56.820948 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-08 00:14:56.820963 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-08 00:14:56.820974 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-08 00:14:56.820985 | orchestrator | ++ CEPH_VERSION=reef 2025-09-08 00:14:56.820995 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-08 00:14:56.821006 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-08 00:14:56.821017 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-08 00:14:56.821027 | 
orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-08 00:14:56.821038 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-08 00:14:56.821056 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-08 00:14:56.821067 | orchestrator | ++ export ARA=false
2025-09-08 00:14:56.821078 | orchestrator | ++ ARA=false
2025-09-08 00:14:56.821089 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-08 00:14:56.821099 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-08 00:14:56.821110 | orchestrator | ++ export TEMPEST=true
2025-09-08 00:14:56.821120 | orchestrator | ++ TEMPEST=true
2025-09-08 00:14:56.821131 | orchestrator | ++ export IS_ZUUL=true
2025-09-08 00:14:56.821142 | orchestrator | ++ IS_ZUUL=true
2025-09-08 00:14:56.821153 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173
2025-09-08 00:14:56.821163 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173
2025-09-08 00:14:56.821178 | orchestrator | ++ export EXTERNAL_API=false
2025-09-08 00:14:56.821189 | orchestrator | ++ EXTERNAL_API=false
2025-09-08 00:14:56.821200 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-08 00:14:56.821210 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-08 00:14:56.821221 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-08 00:14:56.821232 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-08 00:14:56.821243 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-08 00:14:56.821254 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-08 00:14:56.821268 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-08 00:14:56.884128 | orchestrator | + docker version
2025-09-08 00:14:57.153740 | orchestrator | Client: Docker Engine - Community
2025-09-08 00:14:57.153835 | orchestrator | Version: 27.5.1
2025-09-08 00:14:57.153847 | orchestrator | API version: 1.47
2025-09-08 00:14:57.153860 | orchestrator | Go version: go1.22.11
2025-09-08 00:14:57.153870 | orchestrator | Git commit: 9f9e405
2025-09-08 00:14:57.153881 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-08 00:14:57.153893 | orchestrator | OS/Arch: linux/amd64
2025-09-08 00:14:57.153904 | orchestrator | Context: default
2025-09-08 00:14:57.153914 | orchestrator |
2025-09-08 00:14:57.153926 | orchestrator | Server: Docker Engine - Community
2025-09-08 00:14:57.153937 | orchestrator | Engine:
2025-09-08 00:14:57.153958 | orchestrator | Version: 27.5.1
2025-09-08 00:14:57.153970 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-08 00:14:57.154007 | orchestrator | Go version: go1.22.11
2025-09-08 00:14:57.154065 | orchestrator | Git commit: 4c9b3b0
2025-09-08 00:14:57.154077 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-08 00:14:57.154088 | orchestrator | OS/Arch: linux/amd64
2025-09-08 00:14:57.154099 | orchestrator | Experimental: false
2025-09-08 00:14:57.154109 | orchestrator | containerd:
2025-09-08 00:14:57.154120 | orchestrator | Version: 1.7.27
2025-09-08 00:14:57.154132 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-08 00:14:57.154143 | orchestrator | runc:
2025-09-08 00:14:57.154154 | orchestrator | Version: 1.2.5
2025-09-08 00:14:57.154165 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-09-08 00:14:57.154176 | orchestrator | docker-init:
2025-09-08 00:14:57.154191 | orchestrator | Version: 0.19.0
2025-09-08 00:14:57.154203 | orchestrator | GitCommit: de40ad0
2025-09-08 00:14:57.158142 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-08 00:14:57.168883 | orchestrator | + set -e
2025-09-08 00:14:57.168921 | orchestrator | + source /opt/manager-vars.sh
2025-09-08 00:14:57.168941 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-08 00:14:57.168962 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-08 00:14:57.168983 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-08 00:14:57.169004 | orchestrator | ++ CEPH_VERSION=reef
2025-09-08 00:14:57.169025 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-08 00:14:57.169045 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-08 00:14:57.169066 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-08 00:14:57.169085 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-08 00:14:57.169105 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-08 00:14:57.169123 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-08 00:14:57.169138 | orchestrator | ++ export ARA=false
2025-09-08 00:14:57.169149 | orchestrator | ++ ARA=false
2025-09-08 00:14:57.169160 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-08 00:14:57.169179 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-08 00:14:57.169198 | orchestrator | ++ export TEMPEST=true
2025-09-08 00:14:57.169217 | orchestrator | ++ TEMPEST=true
2025-09-08 00:14:57.169236 | orchestrator | ++ export IS_ZUUL=true
2025-09-08 00:14:57.169255 | orchestrator | ++ IS_ZUUL=true
2025-09-08 00:14:57.169267 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173
2025-09-08 00:14:57.169278 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173
2025-09-08 00:14:57.169288 | orchestrator | ++ export EXTERNAL_API=false
2025-09-08 00:14:57.169299 | orchestrator | ++ EXTERNAL_API=false
2025-09-08 00:14:57.169310 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-08 00:14:57.169320 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-08 00:14:57.169331 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-08 00:14:57.169341 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-08 00:14:57.169353 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-08 00:14:57.169363 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-08 00:14:57.169374 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-08 00:14:57.169390 | orchestrator | ++ export INTERACTIVE=false
2025-09-08 00:14:57.169401 | orchestrator | ++ INTERACTIVE=false
2025-09-08 00:14:57.169412 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-08 00:14:57.169426 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-08 00:14:57.169437 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-09-08 00:14:57.169447 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0
2025-09-08 00:14:57.177437 | orchestrator | + set -e
2025-09-08 00:14:57.177465 | orchestrator | + VERSION=9.2.0
2025-09-08 00:14:57.177478 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml
2025-09-08 00:14:57.185651 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-09-08 00:14:57.185676 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-09-08 00:14:57.190722 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-09-08 00:14:57.195757 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-09-08 00:14:57.205485 | orchestrator | /opt/configuration ~
2025-09-08 00:14:57.205533 | orchestrator | + set -e
2025-09-08 00:14:57.205546 | orchestrator | + pushd /opt/configuration
2025-09-08 00:14:57.205557 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-08 00:14:57.219969 | orchestrator | + source /opt/venv/bin/activate
2025-09-08 00:14:57.221911 | orchestrator | ++ deactivate nondestructive
2025-09-08 00:14:57.221933 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:14:57.221948 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:14:57.221985 | orchestrator | ++ hash -r
2025-09-08 00:14:57.221997 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:14:57.222007 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-08 00:14:57.222066 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-08 00:14:57.222079 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-08 00:14:57.222090 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-08 00:14:57.222101 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-08 00:14:57.222111 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-08 00:14:57.222122 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-08 00:14:57.222133 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-08 00:14:57.222145 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-08 00:14:57.222156 | orchestrator | ++ export PATH
2025-09-08 00:14:57.222172 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:14:57.222183 | orchestrator | ++ '[' -z '' ']'
2025-09-08 00:14:57.222194 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-08 00:14:57.222205 | orchestrator | ++ PS1='(venv) '
2025-09-08 00:14:57.222215 | orchestrator | ++ export PS1
2025-09-08 00:14:57.222226 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-08 00:14:57.222236 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-08 00:14:57.222247 | orchestrator | ++ hash -r
2025-09-08 00:14:57.222258 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-09-08 00:14:58.433298 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-09-08 00:14:58.434061 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2025-09-08 00:14:58.435411 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-09-08 00:14:58.436728 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-09-08 00:14:58.437927 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-09-08 00:14:58.448257 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-09-08 00:14:58.449891 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-09-08 00:14:58.450944 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2025-09-08 00:14:58.452181 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-09-08 00:14:58.485185 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3)
2025-09-08 00:14:58.486952 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-09-08 00:14:58.488737 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0)
2025-09-08 00:14:58.490145 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.8.3)
2025-09-08 00:14:58.494769 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-09-08 00:14:58.706832 | orchestrator | ++ which gilt
2025-09-08 00:14:58.710826 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-09-08 00:14:58.710851 | orchestrator | + /opt/venv/bin/gilt overlay
2025-09-08 00:14:58.972956 | orchestrator | osism.cfg-generics:
2025-09-08 00:14:59.147430 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-09-08 00:14:59.147532 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-09-08 00:14:59.147547 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-09-08 00:14:59.147560 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-09-08 00:14:59.869854 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-09-08 00:14:59.879988 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-09-08 00:15:00.237015 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-09-08 00:15:00.294413 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-08 00:15:00.294483 | orchestrator | + deactivate
2025-09-08 00:15:00.294498 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-08 00:15:00.294511 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-08 00:15:00.294523 | orchestrator | + export PATH
2025-09-08 00:15:00.294534 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-08 00:15:00.294545 | orchestrator | + '[' -n '' ']'
2025-09-08 00:15:00.294558 | orchestrator | + hash -r
2025-09-08 00:15:00.294569 | orchestrator | ~
2025-09-08 00:15:00.294580 | orchestrator | + '[' -n '' ']'
2025-09-08 00:15:00.294590 | orchestrator | + unset VIRTUAL_ENV
2025-09-08 00:15:00.294601 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-08 00:15:00.294612 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-08 00:15:00.294655 | orchestrator | + unset -f deactivate
2025-09-08 00:15:00.294667 | orchestrator | + popd
2025-09-08 00:15:00.296427 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-09-08 00:15:00.296463 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-08 00:15:00.297372 | orchestrator | ++ semver 9.2.0 7.0.0
2025-09-08 00:15:00.347061 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-08 00:15:00.347132 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-08 00:15:00.347147 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-08 00:15:00.439249 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-08 00:15:00.439291 | orchestrator | + source /opt/venv/bin/activate
2025-09-08 00:15:00.439303 | orchestrator | ++ deactivate nondestructive
2025-09-08 00:15:00.439322 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:15:00.439334 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:15:00.439345 | orchestrator | ++ hash -r
2025-09-08 00:15:00.439356 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:15:00.439367 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-08 00:15:00.439378 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-08 00:15:00.439392 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-08 00:15:00.439870 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-08 00:15:00.439886 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-08 00:15:00.439897 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-08 00:15:00.439912 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-08 00:15:00.439925 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-08 00:15:00.440039 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-08 00:15:00.440077 | orchestrator | ++ export PATH
2025-09-08 00:15:00.440093 | orchestrator | ++ '[' -n '' ']'
2025-09-08 00:15:00.440105 | orchestrator | ++ '[' -z '' ']'
2025-09-08 00:15:00.440115 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-08 00:15:00.440264 | orchestrator | ++ PS1='(venv) '
2025-09-08 00:15:00.440279 | orchestrator | ++ export PS1
2025-09-08 00:15:00.440290 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-08 00:15:00.440301 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-08 00:15:00.440311 | orchestrator | ++ hash -r
2025-09-08 00:15:00.440504 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-08 00:15:01.726000 | orchestrator |
2025-09-08 00:15:01.726194 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-09-08 00:15:01.726212 | orchestrator |
2025-09-08 00:15:01.726225 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-08 00:15:02.343909 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:02.344038 | orchestrator |
2025-09-08 00:15:02.344054 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-08 00:15:03.387225 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:03.387334 | orchestrator |
2025-09-08 00:15:03.387349 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-09-08 00:15:03.387361 | orchestrator |
2025-09-08 00:15:03.387371 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-08 00:15:05.683211 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:05.683304 | orchestrator |
2025-09-08 00:15:05.683319 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-09-08 00:15:05.739826 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:05.739881 | orchestrator |
2025-09-08 00:15:05.739893 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-09-08 00:15:06.225108 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:06.225177 | orchestrator |
2025-09-08 00:15:06.225193 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-09-08 00:15:06.265029 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:06.265057 | orchestrator |
2025-09-08 00:15:06.265069 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-08 00:15:06.612268 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:06.612341 | orchestrator |
2025-09-08 00:15:06.612354 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-09-08 00:15:06.671289 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:06.671338 | orchestrator |
2025-09-08 00:15:06.671350 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-09-08 00:15:06.999031 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:06.999093 | orchestrator |
2025-09-08 00:15:06.999105 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-09-08 00:15:07.119477 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:07.119513 | orchestrator |
2025-09-08 00:15:07.119525 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-09-08 00:15:07.119536 | orchestrator |
2025-09-08 00:15:07.119548 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-08 00:15:08.877520 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:08.877607 | orchestrator |
2025-09-08 00:15:08.877621 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-09-08 00:15:09.011952 | orchestrator | included: osism.services.traefik for testbed-manager
2025-09-08 00:15:09.011984 | orchestrator |
2025-09-08 00:15:09.011996 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-09-08 00:15:09.081092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-09-08 00:15:09.081169 | orchestrator |
2025-09-08 00:15:09.081182 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-09-08 00:15:10.238226 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-09-08 00:15:10.238316 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-09-08 00:15:10.238332 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-09-08 00:15:10.238344 | orchestrator |
2025-09-08 00:15:10.238356 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-09-08 00:15:12.128290 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-09-08 00:15:12.128387 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-09-08 00:15:12.128401 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-08 00:15:12.128414 | orchestrator |
2025-09-08 00:15:12.128426 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-08 00:15:12.804520 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-08 00:15:12.804614 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:12.804678 | orchestrator |
2025-09-08 00:15:12.804692 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-08 00:15:13.471163 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-08 00:15:13.471261 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:13.471276 | orchestrator |
2025-09-08 00:15:13.471289 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-08 00:15:13.530952 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:13.530979 | orchestrator |
2025-09-08 00:15:13.530991 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-08 00:15:13.905730 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:13.906587 | orchestrator |
2025-09-08 00:15:13.906654 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-08 00:15:13.989793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-08 00:15:13.989849 | orchestrator |
2025-09-08 00:15:13.989864 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-08 00:15:15.061208 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:15.061308 | orchestrator |
2025-09-08 00:15:15.061323 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-08 00:15:15.876450 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:15.876549 | orchestrator |
2025-09-08 00:15:15.876564 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-08 00:15:27.691874 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:27.691991 | orchestrator |
2025-09-08 00:15:27.692028 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-08 00:15:27.754584 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:27.754692 | orchestrator |
2025-09-08 00:15:27.754708 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-08 00:15:27.754721 | orchestrator |
2025-09-08 00:15:27.754732 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-08 00:15:29.531484 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:29.531594 | orchestrator |
2025-09-08 00:15:29.531610 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-08 00:15:29.654252 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-08 00:15:29.654337 | orchestrator |
2025-09-08 00:15:29.654353 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-08 00:15:29.715034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-08 00:15:29.715099 | orchestrator |
2025-09-08 00:15:29.715113 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-08 00:15:32.425196 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:32.425307 | orchestrator |
2025-09-08 00:15:32.425324 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-08 00:15:32.483033 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:32.483092 | orchestrator |
2025-09-08 00:15:32.483104 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-08 00:15:32.623516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-08 00:15:32.623573 | orchestrator |
2025-09-08 00:15:32.623587 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-08 00:15:35.533242 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-08 00:15:35.533349 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-08 00:15:35.533359 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-08 00:15:35.533366 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-08 00:15:35.533373 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-08 00:15:35.533380 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-08 00:15:35.533386 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-08 00:15:35.533393 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-08 00:15:35.533399 | orchestrator |
2025-09-08 00:15:35.533409 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-08 00:15:36.239165 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:36.239276 | orchestrator |
2025-09-08 00:15:36.239293 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-08 00:15:36.926137 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:36.926249 | orchestrator |
2025-09-08 00:15:36.926264 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-08 00:15:37.004962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-08 00:15:37.005025 | orchestrator |
2025-09-08 00:15:37.005040 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-08 00:15:38.386301 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-08 00:15:38.386409 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-08 00:15:38.386424 | orchestrator |
2025-09-08 00:15:38.386436 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-08 00:15:39.116169 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:39.116279 | orchestrator |
2025-09-08 00:15:39.116294 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-08 00:15:39.183306 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:39.183383 | orchestrator |
2025-09-08 00:15:39.183396 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-08 00:15:39.264370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-08 00:15:39.264478 | orchestrator |
2025-09-08 00:15:39.264494 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-08 00:15:39.942523 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:39.942685 | orchestrator |
2025-09-08 00:15:39.942702 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-08 00:15:39.998552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-08 00:15:39.998592 | orchestrator |
2025-09-08 00:15:39.998623 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-08 00:15:41.528529 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-08 00:15:41.528717 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-08 00:15:41.528735 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:41.528749 | orchestrator |
2025-09-08 00:15:41.528761 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-08 00:15:42.225155 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:42.225998 | orchestrator |
2025-09-08 00:15:42.226084 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-08 00:15:42.277203 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:42.277273 | orchestrator |
2025-09-08 00:15:42.277290 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-08 00:15:42.389923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-08 00:15:42.389964 | orchestrator |
2025-09-08 00:15:42.389978 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-08 00:15:42.945207 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:42.945332 | orchestrator |
2025-09-08 00:15:42.945349 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-08 00:15:43.360335 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:43.360454 | orchestrator |
2025-09-08 00:15:43.360469 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-08 00:15:44.740693 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-08 00:15:44.740815 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-09-08 00:15:44.740830 | orchestrator |
2025-09-08 00:15:44.740843 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-08 00:15:45.441865 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:45.441977 | orchestrator |
2025-09-08 00:15:45.441992 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-08 00:15:45.870316 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:45.870438 | orchestrator |
2025-09-08 00:15:45.870453 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-08 00:15:46.261869 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:46.261977 | orchestrator |
2025-09-08 00:15:46.261992 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-08 00:15:46.309104 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:46.309176 | orchestrator |
2025-09-08 00:15:46.309189 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-08 00:15:46.374347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-08 00:15:46.374486 | orchestrator |
2025-09-08 00:15:46.374501 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-08 00:15:46.414911 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:46.414964 | orchestrator |
2025-09-08 00:15:46.414976 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-08 00:15:48.558775 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-08 00:15:48.558910 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-08 00:15:48.558926 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-09-08 00:15:48.558938 | orchestrator |
2025-09-08 00:15:48.558950 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-08 00:15:49.302900 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:49.303021 | orchestrator |
2025-09-08 00:15:49.303039 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-08 00:15:50.041148 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:50.041260 | orchestrator |
2025-09-08 00:15:50.041274 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-08 00:15:50.790416 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:50.790526 | orchestrator |
2025-09-08 00:15:50.790540 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-08 00:15:50.875729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-08 00:15:50.875835 | orchestrator |
2025-09-08 00:15:50.875849 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-08 00:15:50.925291 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:50.925324 | orchestrator |
2025-09-08 00:15:50.925335 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-08 00:15:51.670208 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-08 00:15:51.670316 | orchestrator |
2025-09-08 00:15:51.670329 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-08 00:15:51.756324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-08 00:15:51.756401 | orchestrator |
2025-09-08 00:15:51.756414 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-08 00:15:52.523216 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:52.523329 | orchestrator |
2025-09-08 00:15:52.523343 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-08 00:15:53.149214 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:53.149323 | orchestrator |
2025-09-08 00:15:53.149338 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-08 00:15:53.206325 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:15:53.206392 | orchestrator |
2025-09-08 00:15:53.206405 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-08 00:15:53.262910 | orchestrator | ok: [testbed-manager]
2025-09-08 00:15:53.262972 | orchestrator |
2025-09-08 00:15:53.262985 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-08 00:15:54.103620 | orchestrator | changed: [testbed-manager]
2025-09-08 00:15:54.103801 | orchestrator |
2025-09-08 00:15:54.103815 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-08 00:17:04.229955 | orchestrator | changed: [testbed-manager]
2025-09-08 00:17:04.230113 | orchestrator |
2025-09-08 00:17:04.230130 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-08 00:17:05.274307 | orchestrator | ok: [testbed-manager]
2025-09-08 00:17:05.274471 | orchestrator |
2025-09-08 00:17:05.274503 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-08 00:17:05.332520 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:17:05.332592 | orchestrator |
2025-09-08 00:17:05.332608 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-08 00:17:08.092038 | orchestrator | changed: [testbed-manager]
2025-09-08 00:17:08.092164 | orchestrator |
2025-09-08 00:17:08.092181 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-08 00:17:08.193024 | orchestrator | ok: [testbed-manager]
2025-09-08 00:17:08.193123 | orchestrator |
2025-09-08 00:17:08.193137 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-08 00:17:08.193150 | orchestrator |
2025-09-08 00:17:08.193162 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-08 00:17:08.272954 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:17:08.273040 | orchestrator |
2025-09-08 00:17:08.273054 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-08 00:18:08.327535 | orchestrator | Pausing for 60 seconds
2025-09-08 00:18:08.327708 | orchestrator | changed: [testbed-manager]
2025-09-08 00:18:08.327728 | orchestrator |
2025-09-08 00:18:08.327741 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-08 00:18:14.001571 | orchestrator | changed: [testbed-manager]
2025-09-08 00:18:14.001723 | orchestrator |
2025-09-08 00:18:14.001742 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-08 00:18:55.709410 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-08 00:18:55.709550 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-08 00:18:55.709569 | orchestrator | changed: [testbed-manager] 2025-09-08 00:18:55.709582 | orchestrator | 2025-09-08 00:18:55.709594 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-08 00:19:05.741393 | orchestrator | changed: [testbed-manager] 2025-09-08 00:19:05.741504 | orchestrator | 2025-09-08 00:19:05.741520 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-08 00:19:05.841993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-08 00:19:05.842117 | orchestrator | 2025-09-08 00:19:05.842131 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-08 00:19:05.842144 | orchestrator | 2025-09-08 00:19:05.842155 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-08 00:19:05.900799 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:19:05.900869 | orchestrator | 2025-09-08 00:19:05.900883 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:19:05.900896 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-08 00:19:05.900913 | orchestrator | 2025-09-08 00:19:06.036591 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-08 00:19:06.036721 | orchestrator | + deactivate 2025-09-08 00:19:06.036738 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-08 00:19:06.036752 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-08 00:19:06.036764 | orchestrator | + export PATH 2025-09-08 00:19:06.036775 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-08 
00:19:06.036786 | orchestrator | + '[' -n '' ']' 2025-09-08 00:19:06.036797 | orchestrator | + hash -r 2025-09-08 00:19:06.036808 | orchestrator | + '[' -n '' ']' 2025-09-08 00:19:06.036819 | orchestrator | + unset VIRTUAL_ENV 2025-09-08 00:19:06.036830 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-08 00:19:06.036841 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-09-08 00:19:06.036852 | orchestrator | + unset -f deactivate 2025-09-08 00:19:06.036863 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-08 00:19:06.043202 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-08 00:19:06.043224 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-08 00:19:06.043235 | orchestrator | + local max_attempts=60 2025-09-08 00:19:06.043246 | orchestrator | + local name=ceph-ansible 2025-09-08 00:19:06.043257 | orchestrator | + local attempt_num=1 2025-09-08 00:19:06.044406 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:19:06.085112 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:19:06.085160 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-08 00:19:06.085173 | orchestrator | + local max_attempts=60 2025-09-08 00:19:06.085185 | orchestrator | + local name=kolla-ansible 2025-09-08 00:19:06.085228 | orchestrator | + local attempt_num=1 2025-09-08 00:19:06.085799 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-08 00:19:06.119684 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:19:06.119722 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-08 00:19:06.119734 | orchestrator | + local max_attempts=60 2025-09-08 00:19:06.119745 | orchestrator | + local name=osism-ansible 2025-09-08 00:19:06.119755 | orchestrator | + local attempt_num=1 2025-09-08 00:19:06.121131 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-09-08 00:19:06.170930 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:19:06.170969 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-08 00:19:06.170981 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-08 00:19:06.906677 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-08 00:19:07.149563 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-08 00:19:07.149644 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-09-08 00:19:07.149704 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-09-08 00:19:07.149716 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-08 00:19:07.149729 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-09-08 00:19:07.149740 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-09-08 00:19:07.149751 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-09-08 00:19:07.149762 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy) 2025-09-08 00:19:07.149773 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 
listener About a minute ago Up About a minute (healthy) 2025-09-08 00:19:07.149783 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-09-08 00:19:07.149794 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-09-08 00:19:07.149805 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-09-08 00:19:07.149815 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-09-08 00:19:07.149826 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-08 00:19:07.149837 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-09-08 00:19:07.149937 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-09-08 00:19:07.157798 | orchestrator | ++ semver 9.2.0 7.0.0 2025-09-08 00:19:07.190201 | orchestrator | + [[ 1 -ge 0 ]] 2025-09-08 00:19:07.190255 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-08 00:19:07.192936 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-08 00:19:19.444637 | orchestrator | 2025-09-08 00:19:19 | INFO  | Task f6fce024-d5cd-42e9-bbd4-cf077e1eaef6 (resolvconf) was prepared for execution. 
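The shell trace above polls `docker inspect` for each container's health status via a `wait_for_container_healthy` helper. A minimal sketch of such a helper, reconstructed from the trace: only the local variable names (`max_attempts`, `name`, `attempt_num`) and the `docker inspect -f '{{.State.Health.Status}}'` call appear in the log; the `DOCKER` override variable, the retry loop body, the 5-second sleep, and the error message are assumptions added for illustration.

```shell
#!/usr/bin/env bash
# Sketch of a container health-wait helper, assuming the docker CLI.
# DOCKER can be overridden (e.g. for testing with a stub command).
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until docker reports "healthy".
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}

# usage: wait_for_container_healthy 60 ceph-ansible
```

In the run above each check succeeds on the first attempt, since the handler earlier in the play already waited for the manager containers to report healthy.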
2025-09-08 00:19:19.444797 | orchestrator | 2025-09-08 00:19:19 | INFO  | It takes a moment until task f6fce024-d5cd-42e9-bbd4-cf077e1eaef6 (resolvconf) has been started and output is visible here. 2025-09-08 00:19:33.416787 | orchestrator | 2025-09-08 00:19:33.416913 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-08 00:19:33.416930 | orchestrator | 2025-09-08 00:19:33.416942 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:19:33.416954 | orchestrator | Monday 08 September 2025 00:19:23 +0000 (0:00:00.155) 0:00:00.155 ****** 2025-09-08 00:19:33.416965 | orchestrator | ok: [testbed-manager] 2025-09-08 00:19:33.416977 | orchestrator | 2025-09-08 00:19:33.416989 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-08 00:19:33.417001 | orchestrator | Monday 08 September 2025 00:19:27 +0000 (0:00:03.930) 0:00:04.086 ****** 2025-09-08 00:19:33.417012 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:19:33.417023 | orchestrator | 2025-09-08 00:19:33.417034 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-08 00:19:33.417044 | orchestrator | Monday 08 September 2025 00:19:27 +0000 (0:00:00.066) 0:00:04.153 ****** 2025-09-08 00:19:33.417056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-08 00:19:33.417068 | orchestrator | 2025-09-08 00:19:33.417078 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-08 00:19:33.417089 | orchestrator | Monday 08 September 2025 00:19:27 +0000 (0:00:00.087) 0:00:04.240 ****** 2025-09-08 00:19:33.417100 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-08 00:19:33.417111 | orchestrator | 2025-09-08 00:19:33.417122 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-08 00:19:33.417132 | orchestrator | Monday 08 September 2025 00:19:27 +0000 (0:00:00.087) 0:00:04.327 ****** 2025-09-08 00:19:33.417143 | orchestrator | ok: [testbed-manager] 2025-09-08 00:19:33.417154 | orchestrator | 2025-09-08 00:19:33.417165 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-08 00:19:33.417176 | orchestrator | Monday 08 September 2025 00:19:28 +0000 (0:00:01.131) 0:00:05.458 ****** 2025-09-08 00:19:33.417186 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:19:33.417197 | orchestrator | 2025-09-08 00:19:33.417209 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-08 00:19:33.417220 | orchestrator | Monday 08 September 2025 00:19:28 +0000 (0:00:00.047) 0:00:05.506 ****** 2025-09-08 00:19:33.417230 | orchestrator | ok: [testbed-manager] 2025-09-08 00:19:33.417241 | orchestrator | 2025-09-08 00:19:33.417252 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-08 00:19:33.417262 | orchestrator | Monday 08 September 2025 00:19:29 +0000 (0:00:00.497) 0:00:06.003 ****** 2025-09-08 00:19:33.417276 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:19:33.417288 | orchestrator | 2025-09-08 00:19:33.417300 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-08 00:19:33.417339 | orchestrator | Monday 08 September 2025 00:19:29 +0000 (0:00:00.078) 0:00:06.082 ****** 2025-09-08 00:19:33.417352 | orchestrator | changed: [testbed-manager] 2025-09-08 00:19:33.417364 | orchestrator | 2025-09-08 
00:19:33.417377 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-08 00:19:33.417390 | orchestrator | Monday 08 September 2025 00:19:29 +0000 (0:00:00.504) 0:00:06.586 ****** 2025-09-08 00:19:33.417403 | orchestrator | changed: [testbed-manager] 2025-09-08 00:19:33.417415 | orchestrator | 2025-09-08 00:19:33.417428 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-08 00:19:33.417440 | orchestrator | Monday 08 September 2025 00:19:30 +0000 (0:00:01.077) 0:00:07.663 ****** 2025-09-08 00:19:33.417453 | orchestrator | ok: [testbed-manager] 2025-09-08 00:19:33.417465 | orchestrator | 2025-09-08 00:19:33.417476 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-08 00:19:33.417498 | orchestrator | Monday 08 September 2025 00:19:31 +0000 (0:00:00.975) 0:00:08.639 ****** 2025-09-08 00:19:33.417509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-08 00:19:33.417520 | orchestrator | 2025-09-08 00:19:33.417531 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-08 00:19:33.417542 | orchestrator | Monday 08 September 2025 00:19:31 +0000 (0:00:00.078) 0:00:08.717 ****** 2025-09-08 00:19:33.417552 | orchestrator | changed: [testbed-manager] 2025-09-08 00:19:33.417563 | orchestrator | 2025-09-08 00:19:33.417573 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:19:33.417585 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-08 00:19:33.417596 | orchestrator | 2025-09-08 00:19:33.417606 | orchestrator | 2025-09-08 00:19:33.417617 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-08 00:19:33.417627 | orchestrator | Monday 08 September 2025 00:19:33 +0000 (0:00:01.194) 0:00:09.911 ****** 2025-09-08 00:19:33.417638 | orchestrator | =============================================================================== 2025-09-08 00:19:33.417668 | orchestrator | Gathering Facts --------------------------------------------------------- 3.93s 2025-09-08 00:19:33.417680 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2025-09-08 00:19:33.417691 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s 2025-09-08 00:19:33.417701 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s 2025-09-08 00:19:33.417712 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2025-09-08 00:19:33.417723 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2025-09-08 00:19:33.417753 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2025-09-08 00:19:33.417765 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-09-08 00:19:33.417775 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-09-08 00:19:33.417786 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-08 00:19:33.417796 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-09-08 00:19:33.417807 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-09-08 00:19:33.417818 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-09-08 00:19:33.697366 | 
orchestrator | + osism apply sshconfig 2025-09-08 00:19:45.694878 | orchestrator | 2025-09-08 00:19:45 | INFO  | Task 2a6d1aa5-a1ea-4d5a-8e4f-227cba6adb84 (sshconfig) was prepared for execution. 2025-09-08 00:19:45.695006 | orchestrator | 2025-09-08 00:19:45 | INFO  | It takes a moment until task 2a6d1aa5-a1ea-4d5a-8e4f-227cba6adb84 (sshconfig) has been started and output is visible here. 2025-09-08 00:19:57.688337 | orchestrator | 2025-09-08 00:19:57.688459 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-08 00:19:57.688476 | orchestrator | 2025-09-08 00:19:57.688489 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-08 00:19:57.688500 | orchestrator | Monday 08 September 2025 00:19:49 +0000 (0:00:00.164) 0:00:00.164 ****** 2025-09-08 00:19:57.688511 | orchestrator | ok: [testbed-manager] 2025-09-08 00:19:57.688522 | orchestrator | 2025-09-08 00:19:57.688534 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-08 00:19:57.688544 | orchestrator | Monday 08 September 2025 00:19:50 +0000 (0:00:00.644) 0:00:00.808 ****** 2025-09-08 00:19:57.688555 | orchestrator | changed: [testbed-manager] 2025-09-08 00:19:57.688567 | orchestrator | 2025-09-08 00:19:57.688577 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-08 00:19:57.688588 | orchestrator | Monday 08 September 2025 00:19:50 +0000 (0:00:00.533) 0:00:01.342 ****** 2025-09-08 00:19:57.688599 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-08 00:19:57.688610 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-08 00:19:57.688621 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-08 00:19:57.688632 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-08 00:19:57.688643 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-3) 2025-09-08 00:19:57.688710 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-08 00:19:57.688722 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-08 00:19:57.688733 | orchestrator | 2025-09-08 00:19:57.688743 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-08 00:19:57.688754 | orchestrator | Monday 08 September 2025 00:19:56 +0000 (0:00:05.899) 0:00:07.242 ****** 2025-09-08 00:19:57.688788 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:19:57.688799 | orchestrator | 2025-09-08 00:19:57.688810 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-08 00:19:57.688821 | orchestrator | Monday 08 September 2025 00:19:56 +0000 (0:00:00.064) 0:00:07.306 ****** 2025-09-08 00:19:57.688832 | orchestrator | changed: [testbed-manager] 2025-09-08 00:19:57.688842 | orchestrator | 2025-09-08 00:19:57.688853 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:19:57.688865 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:19:57.688879 | orchestrator | 2025-09-08 00:19:57.688893 | orchestrator | 2025-09-08 00:19:57.688906 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:19:57.688918 | orchestrator | Monday 08 September 2025 00:19:57 +0000 (0:00:00.622) 0:00:07.928 ****** 2025-09-08 00:19:57.688931 | orchestrator | =============================================================================== 2025-09-08 00:19:57.688944 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.90s 2025-09-08 00:19:57.688957 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.64s 2025-09-08 00:19:57.688970 | orchestrator | 
osism.commons.sshconfig : Assemble ssh config --------------------------- 0.62s 2025-09-08 00:19:57.688982 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2025-09-08 00:19:57.688994 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-09-08 00:19:57.981823 | orchestrator | + osism apply known-hosts 2025-09-08 00:20:09.968701 | orchestrator | 2025-09-08 00:20:09 | INFO  | Task 06d8272e-7aad-4420-a5ca-c25f4236062a (known-hosts) was prepared for execution. 2025-09-08 00:20:09.968804 | orchestrator | 2025-09-08 00:20:09 | INFO  | It takes a moment until task 06d8272e-7aad-4420-a5ca-c25f4236062a (known-hosts) has been started and output is visible here. 2025-09-08 00:20:26.937157 | orchestrator | 2025-09-08 00:20:26.937278 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-08 00:20:26.937295 | orchestrator | 2025-09-08 00:20:26.937307 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-08 00:20:26.937319 | orchestrator | Monday 08 September 2025 00:20:13 +0000 (0:00:00.166) 0:00:00.166 ****** 2025-09-08 00:20:26.937330 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-08 00:20:26.937342 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-08 00:20:26.937353 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-08 00:20:26.937364 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-08 00:20:26.937375 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-08 00:20:26.937385 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-08 00:20:26.937396 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-08 00:20:26.937407 | orchestrator | 2025-09-08 00:20:26.937418 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts 
entries for all hosts with hostname] *** 2025-09-08 00:20:26.937430 | orchestrator | Monday 08 September 2025 00:20:19 +0000 (0:00:06.043) 0:00:06.209 ****** 2025-09-08 00:20:26.937442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-08 00:20:26.937455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-08 00:20:26.937466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-08 00:20:26.937477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-08 00:20:26.937488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-08 00:20:26.937498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-08 00:20:26.937509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-08 00:20:26.937520 | orchestrator | 2025-09-08 00:20:26.937531 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:26.937542 | orchestrator | Monday 08 September 2025 00:20:20 +0000 
(0:00:00.173) 0:00:06.383 ****** 2025-09-08 00:20:26.937553 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAVonDSWKfv3qCUzCr7EV83gtcQiNZplHrPq1RG7+dSA) 2025-09-08 00:20:26.937625 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2VZu5OdVVl867Wg0MP4r7qlyvxxqG7xtD8VsRQC86flkdAge7wUHxqUxlwvJGRqyCO8HG+adsAYOeqy//bAQtaPpYFfQ5oWFUMf5jhWwyYoQN7lmbwt1xNn3vFywDU3vKMVngPgAkYKyCZDFHTaEcsUwbHmR7xtgPcydCzd/Nf+A8aN3rygRNNbq1FXYQCZi7+dg/BdUM0TzyYQHMMzhe+6bUggaW4Ol1/77i3/h8O1ejJR1XKLj5RBUqfsS8CidCygu4WeDbyVhaWlSpZkEio0VbowviZekxV3C49LUl6y1HE8s9DJU1M9w3xWwAldvP7S2AHdKAytA2fRDzTg2nUVadq/nl/aURvPbLuhZu3SlNt9ghRA4kn8L3Z8zlAtTzMbThCSM0adLutkMIn9d/9Wzkr6ShAXp6IjnXNXy4/ZLSkd5eH5GIkbUFqVjGIRKD4RjGx99tHuFxc6TXKi+M30zYTfwumJB6G9Tmimxtbs+zhOAuEQiGui5A6c97lFk=) 2025-09-08 00:20:26.937642 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGMgewf4urJYKQovBnNSPJFxFafGLaEKArNu8si3+b0rsXyWmF810/vxhevM6F8Qd7dUQq2rpGrwsEr2XKB6YqU=) 2025-09-08 00:20:26.937713 | orchestrator | 2025-09-08 00:20:26.937727 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:26.937741 | orchestrator | Monday 08 September 2025 00:20:21 +0000 (0:00:01.251) 0:00:07.635 ****** 2025-09-08 00:20:26.937753 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBqktKGSO9RxX8YmXvISyOtuz7F5mX0OEPoql+XSbz7ybs54uptQgiyRfS2eor9JYJkrl61OVk+T7JNVxQvcTY4=) 2025-09-08 00:20:26.937792 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDRRYsrXJ2RwTZ0sj5bmDzNQEH9/i8OWbbszWuA3SSGmSVlE/KnnTMxXReNCfP+0eHw0xhUqS6UvkjJ7BVi3ne8qun4sHwgkrJXpAQ70IdXZBlzdh5ieVxUHOD8+UWSWoUYVpxW/Jg82kFtzk5iAAm0GtVOrvUFaR1z9lDahMZmRwlpINXE42gWHJJDJ2dblWI4/VoJbv2nrQm9phtzPkKlqJnAzEd5RbXeTcg0UHX3aDkNabH4aGOXTVTpVJoX7PYPjgBnW16FCeZ8aaHNfDUtnAsxEtLvlnT12tCvBy0T26MihY/SZCDsszMjODIlPM9vQ2bYd0TXvLzLKngVdneqdLckQVMUaa+cQOv7YRoIych0jIsLVkvLU7kRxsUBi4XZti2vtAHWWBQy2qQHeuwCAQ7Ryrw3c1rHHaSJVVxLz2ZVEkKmJRztJ0PGYeTKsXTdBHxsuYf88D5sJ1cHPTdhHSAoYWMB3lMDkM/SRmcuG3vaUevN/Vx6RDQDvqsg7KE=) 2025-09-08 00:20:26.937805 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILAE8sHTn+iou+Q3+j2H1+FQfemtdLa2NPCQkEjI3dk8) 2025-09-08 00:20:26.937816 | orchestrator | 2025-09-08 00:20:26.937827 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:26.937838 | orchestrator | Monday 08 September 2025 00:20:22 +0000 (0:00:01.113) 0:00:08.749 ****** 2025-09-08 00:20:26.937849 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEpmzgWMEbq7kOWTKL/5rMxQ+TC+uC8OTVoTvObVpEXn4kjaFz7zwnRWHYGnJ2oOGB9SCYpPFX056ERMIvrDmGE=) 2025-09-08 00:20:26.937860 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCchjpNLMSXQfZtrpDSGFBaunoWgol7SsSHkFjf4dM8N3nOacEqamCTpHHr8dh78r4WGm72pFGC6HTgxA/jyG/dBB3za1nzu3Kz6aF9iX0Z5ioe+jak4oVIZC+AYtLhtOgF+B3q2wFLRnrhT4yclMXtQkHapDUrcUTfIeZuJzkmn5r18ye2nFKpos194qYi9+NgnMYPMpPiRMckdAv64aYXm/Wvilw/Kg6Fsqx8wjLHwos3ny8AvHSmKeo/Yf9lZPSbgkawGZSV5wri18mDtSYC0m2bKsF3h59ebg+7s5nH6Jhl3RcNIpdWxnZOHSy2gNOciudFMVgt1NXWDxgK3t48Z8SFFi881s5JkNCexxL34Sf+FT0Nfxu2wnfUl3WPxKcoSgcYr7HrKrcvPZeAr3uNPVd+8ntCEE+/o1uUf6QBLwjLFw2odv4RDUBemkBwH8pt1tPSfaoCAmb78LY+851IliLppgQwmVSQPvJUnYDIBmId1Nu5W4eblaDejE+duc8=) 2025-09-08 00:20:26.937872 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF68VDF6kcL2Ay7xwJfcpmM1z+sb6hMwcbknG2gUfupu) 2025-09-08 00:20:26.937883 | orchestrator | 2025-09-08 00:20:26.937894 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:26.937906 | orchestrator | Monday 08 September 2025 00:20:23 +0000 (0:00:01.089) 0:00:09.838 ****** 2025-09-08 00:20:26.937917 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZz8EkqOZM6Wd5n38TzMkf73W0vchy+GlkSrUjnvZAHiBGS0NQlom9G4fmyAqAHdooPEeeeGvHsH9BTo2Hmq2U2ioDirLfy6EsrMN/YkRFYzvznAd/iGjmnsB5rHympkC/IGBQAOAqLV6rFSSkHoRoJcWoOf6XoJ49i9TTeaGlsKrKSC7EigQdkCEdegrbhJuZ/7cKxqHtix/5OYO6wKfmmmfMFecoctUIzK2eibYolc0061mPoHfnkDskNGMai7GWzdpv0XS7Yq7ZNjhA4YREsPkGoU91zx59eJ74pGmAPhxgdkK3VgEdDeoxtiUKh0ZbtfbLh7gw43mAZAlO6pp9Xm5EZB7uZrMtyrSPTzOHb9rrLrKmJ7CBnUxt6aoZJCqHDlOPLt/+/v2Rx7v6np1iVLRDI/JxxpvtW3LhTpIx3nHGJ12SPFgtWNz8+VLZCiPD55ZawutXixzUZ7nW/8+vsSIxaveB7KyKrj2rjCp8fowJi0jxWM8DUAHXTgHRk9c=) 2025-09-08 00:20:26.937928 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCC6fsqQUXeSI9d7CiAGgXMq3ZkwoC4esXEyytIjbbxYitQ0NT+ZOY0QTtNSP/rYYZm7KMhxAK85luJkga1wR/0=) 2025-09-08 00:20:26.937939 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBfVxnj4MAenQSANJtVislavPpRr9wB8l9DHm+aySNZ1) 2025-09-08 00:20:26.937957 | orchestrator | 2025-09-08 00:20:26.937968 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:26.937979 | orchestrator | Monday 08 September 2025 00:20:24 +0000 (0:00:01.102) 0:00:10.941 ****** 2025-09-08 00:20:26.937990 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCqc4CeIm+DoJtyo7tLTLdKq/LQ91SCVo1FNv05Od0HILVZ5tZdLsz9QJWJSdmOF4RVo2dMF5D44ZhVm/DI3O7ytoio2MoAqXyo1DtXtYxjorpp8gnt5rAf3M1goSIqoI9tqmCeXcdWQjjH+kJBXxPzU+MMLJE6vbvsHdft3ktDyVUjppBveNJ5u3uOMxgZNBgXo3zFzG6ainAFhL1+hNJqjt1Qq+4NyeZVspUx0IrgcE0tgrOaOqmF6Sf4G2N5J49UtGu57LNs3/u/aPprJ0WbP/kI/iRRni91u1Zsi4S07o+08DKcotU5ssvB4kqujFiHQ4t5fK7cy6vhuC4c3G7HFIYAwkIIDrjmqhSUcoNg5iuBWkxLBKgXyPlRXUripW27HqY/N9moXvr/a7wrSCHpn9Aysr75+dmlYTgeeJgHKEYYHgvgeDQxZGePwvc0cyPxwu4ure3fNO8NpSTLmF8ODIn+vddHNLj+cLLlDqnXoCL4ewIp2Vw7dv524k6MwI8=) 2025-09-08 00:20:26.938011 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEL9CacLzQIeTS60D7p/HhhXM20F75jVTM4FnZMA1O+WOupJy2RDnR5kXJ27kB2i+jyotAUkaH5O67xtILu67sc=) 2025-09-08 00:20:26.938102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFNgKKY20wEQ7WJtbU0os2Y6dUTX9p9b7zU+aXbXydqN) 2025-09-08 00:20:26.938113 | orchestrator | 2025-09-08 00:20:26.938124 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:26.938135 | orchestrator | Monday 08 September 2025 00:20:25 +0000 (0:00:01.090) 0:00:12.032 ****** 2025-09-08 00:20:26.938153 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHl7rBSfRjgAOxGEJI73aGZFCpt/IuBR34OoHX14O4twHEaXc9U1a7EtyKH2U088UZUyGVvn4Jewe2NRR4NUMsk=) 2025-09-08 00:20:39.276856 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDRzD6foUfutf5BIx6VU//9UOBDCun9JRIAyYrjl873cRnk6pC9fMCINLJDlm4JUfH8fJ8vC1xG1Kit69LFxHq76Dulg7UCAQLtZhbgcZeZhZ+dwOET3QHYPHYsFviN3WuYxpBNQa+V1/xKNiuPpzlzvsJ0NWIfafJuwRKkeccDIUhpOeQYMkPmlPm594P72yLGuG5TFeH78Y8svZZHR4mjyv9Qmy46De+pXeSm1/XeXjvdI95DKpjcnfrHWj9bKkuFvImpypMlADE9SItycXuurprYsvpT1Aj11w4c8jvyp6rEnMqBuStR/8j1Td0K3A7s5P5GmMWAU31ouo8KxP2h7bE9qXAfFPZ9TI14kFg1IJQsMDDPyMoJe26rv2XkggnjajhKoaxXp3UIRwJvGdKcrZjrnuOKf65sOhvnFqiB9TFBtEYQtEks/acRPiNob3B8XHwLHWVkJ63Oq+xof+2tm0Of47nQwwUM8de6wkXIKfr05GK0c3jS46mHrr9E23U=) 2025-09-08 00:20:39.276987 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID/c/b8X0iUDgzdeQ5eU0/FU+33QA4ijpL2pzypmV3Uz) 2025-09-08 00:20:39.277006 | orchestrator | 2025-09-08 00:20:39.277019 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:39.277032 | orchestrator | Monday 08 September 2025 00:20:26 +0000 (0:00:01.106) 0:00:13.139 ****** 2025-09-08 00:20:39.277044 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiMRhkWFKCBwXdPRYC3gdhvyMQ78/FkIt1Nr2sVVdjL0OF4IuikpnWIoOxkG1MQYViRtyhKo00663eSfD8hNAJpIsYPNCGXkXGAHpkHT1ib429w23gXzjQqmhizDubxjLidPx36+IKhkZQYfLbp208IKRECZ5k9BalHtXm5hJGApQNt7anvBwywJOxs+un8gggOJKznuXhJPGpYtpjvMqaWQ4JtlG6h+tM05NSsNRRra1C1m1qMvaV5AQ9tW3tr2eZCq6eCfqzu05tWoWeGgychTZCgGAkRwUybCizlxJs9O6zufYm9GvGYE5jV8wP2DaYFP9SEJWEFBvuyB7v6eX0v6uULflKhyGYUEo1DENGGhxO2S+5Ex1s+gztjLdspIV0G6jyppWcoJGpN+wxGdjCuqEFl9i3K7ta0NHj5HaVZZfmaRtF2Ak4QbsO/3PFc5aThkgkRY+6PZAuUbfLqunmzOWSqKRcEIeHBw8Bvzjr3FJ4WLtGgIdvCjI0YtdZhOs=) 2025-09-08 00:20:39.277056 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEo6IBpVvnIudBI6sb9RshJVYC/MFQ0YMzmosqeJtN95) 2025-09-08 00:20:39.277067 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKcOtgGvDTlV2zhlFnyLsTDQH/axjx6ukSn9TjTj6bRY2Kdqpfd9IYnpAliePk/0YhnggOGngYsZYX79nQ8p2t4=) 2025-09-08 00:20:39.277080 | orchestrator | 2025-09-08 00:20:39.277091 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-08 00:20:39.277130 | orchestrator | Monday 08 September 2025 00:20:28 +0000 (0:00:01.094) 0:00:14.233 ****** 2025-09-08 00:20:39.277143 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-08 00:20:39.277154 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-08 00:20:39.277165 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-08 00:20:39.277175 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-08 00:20:39.277186 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-08 00:20:39.277197 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-08 00:20:39.277208 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-08 00:20:39.277219 | orchestrator | 2025-09-08 00:20:39.277230 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-08 00:20:39.277242 | orchestrator | Monday 08 September 2025 00:20:33 +0000 (0:00:05.522) 0:00:19.756 ****** 2025-09-08 00:20:39.277254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-08 00:20:39.277267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-08 00:20:39.277278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-08 00:20:39.277288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-08 00:20:39.277299 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-08 00:20:39.277310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-08 00:20:39.277320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-08 00:20:39.277331 | orchestrator | 2025-09-08 00:20:39.277358 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:39.277370 | orchestrator | Monday 08 September 2025 00:20:33 +0000 (0:00:00.163) 0:00:19.919 ****** 2025-09-08 00:20:39.277383 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAVonDSWKfv3qCUzCr7EV83gtcQiNZplHrPq1RG7+dSA) 2025-09-08 00:20:39.277421 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2VZu5OdVVl867Wg0MP4r7qlyvxxqG7xtD8VsRQC86flkdAge7wUHxqUxlwvJGRqyCO8HG+adsAYOeqy//bAQtaPpYFfQ5oWFUMf5jhWwyYoQN7lmbwt1xNn3vFywDU3vKMVngPgAkYKyCZDFHTaEcsUwbHmR7xtgPcydCzd/Nf+A8aN3rygRNNbq1FXYQCZi7+dg/BdUM0TzyYQHMMzhe+6bUggaW4Ol1/77i3/h8O1ejJR1XKLj5RBUqfsS8CidCygu4WeDbyVhaWlSpZkEio0VbowviZekxV3C49LUl6y1HE8s9DJU1M9w3xWwAldvP7S2AHdKAytA2fRDzTg2nUVadq/nl/aURvPbLuhZu3SlNt9ghRA4kn8L3Z8zlAtTzMbThCSM0adLutkMIn9d/9Wzkr6ShAXp6IjnXNXy4/ZLSkd5eH5GIkbUFqVjGIRKD4RjGx99tHuFxc6TXKi+M30zYTfwumJB6G9Tmimxtbs+zhOAuEQiGui5A6c97lFk=) 2025-09-08 00:20:39.277436 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGMgewf4urJYKQovBnNSPJFxFafGLaEKArNu8si3+b0rsXyWmF810/vxhevM6F8Qd7dUQq2rpGrwsEr2XKB6YqU=) 2025-09-08 00:20:39.277448 | orchestrator | 2025-09-08 00:20:39.277461 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:39.277474 | orchestrator | Monday 08 September 2025 00:20:35 +0000 (0:00:02.193) 0:00:22.113 ****** 2025-09-08 00:20:39.277495 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBqktKGSO9RxX8YmXvISyOtuz7F5mX0OEPoql+XSbz7ybs54uptQgiyRfS2eor9JYJkrl61OVk+T7JNVxQvcTY4=) 2025-09-08 00:20:39.277508 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRRYsrXJ2RwTZ0sj5bmDzNQEH9/i8OWbbszWuA3SSGmSVlE/KnnTMxXReNCfP+0eHw0xhUqS6UvkjJ7BVi3ne8qun4sHwgkrJXpAQ70IdXZBlzdh5ieVxUHOD8+UWSWoUYVpxW/Jg82kFtzk5iAAm0GtVOrvUFaR1z9lDahMZmRwlpINXE42gWHJJDJ2dblWI4/VoJbv2nrQm9phtzPkKlqJnAzEd5RbXeTcg0UHX3aDkNabH4aGOXTVTpVJoX7PYPjgBnW16FCeZ8aaHNfDUtnAsxEtLvlnT12tCvBy0T26MihY/SZCDsszMjODIlPM9vQ2bYd0TXvLzLKngVdneqdLckQVMUaa+cQOv7YRoIych0jIsLVkvLU7kRxsUBi4XZti2vtAHWWBQy2qQHeuwCAQ7Ryrw3c1rHHaSJVVxLz2ZVEkKmJRztJ0PGYeTKsXTdBHxsuYf88D5sJ1cHPTdhHSAoYWMB3lMDkM/SRmcuG3vaUevN/Vx6RDQDvqsg7KE=) 
2025-09-08 00:20:39.277522 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILAE8sHTn+iou+Q3+j2H1+FQfemtdLa2NPCQkEjI3dk8) 2025-09-08 00:20:39.277534 | orchestrator | 2025-09-08 00:20:39.277546 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:39.277558 | orchestrator | Monday 08 September 2025 00:20:37 +0000 (0:00:01.121) 0:00:23.234 ****** 2025-09-08 00:20:39.277572 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCchjpNLMSXQfZtrpDSGFBaunoWgol7SsSHkFjf4dM8N3nOacEqamCTpHHr8dh78r4WGm72pFGC6HTgxA/jyG/dBB3za1nzu3Kz6aF9iX0Z5ioe+jak4oVIZC+AYtLhtOgF+B3q2wFLRnrhT4yclMXtQkHapDUrcUTfIeZuJzkmn5r18ye2nFKpos194qYi9+NgnMYPMpPiRMckdAv64aYXm/Wvilw/Kg6Fsqx8wjLHwos3ny8AvHSmKeo/Yf9lZPSbgkawGZSV5wri18mDtSYC0m2bKsF3h59ebg+7s5nH6Jhl3RcNIpdWxnZOHSy2gNOciudFMVgt1NXWDxgK3t48Z8SFFi881s5JkNCexxL34Sf+FT0Nfxu2wnfUl3WPxKcoSgcYr7HrKrcvPZeAr3uNPVd+8ntCEE+/o1uUf6QBLwjLFw2odv4RDUBemkBwH8pt1tPSfaoCAmb78LY+851IliLppgQwmVSQPvJUnYDIBmId1Nu5W4eblaDejE+duc8=) 2025-09-08 00:20:39.277585 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEpmzgWMEbq7kOWTKL/5rMxQ+TC+uC8OTVoTvObVpEXn4kjaFz7zwnRWHYGnJ2oOGB9SCYpPFX056ERMIvrDmGE=) 2025-09-08 00:20:39.277597 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF68VDF6kcL2Ay7xwJfcpmM1z+sb6hMwcbknG2gUfupu) 2025-09-08 00:20:39.277610 | orchestrator | 2025-09-08 00:20:39.277622 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:39.277635 | orchestrator | Monday 08 September 2025 00:20:38 +0000 (0:00:01.109) 0:00:24.343 ****** 2025-09-08 00:20:39.277684 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDZz8EkqOZM6Wd5n38TzMkf73W0vchy+GlkSrUjnvZAHiBGS0NQlom9G4fmyAqAHdooPEeeeGvHsH9BTo2Hmq2U2ioDirLfy6EsrMN/YkRFYzvznAd/iGjmnsB5rHympkC/IGBQAOAqLV6rFSSkHoRoJcWoOf6XoJ49i9TTeaGlsKrKSC7EigQdkCEdegrbhJuZ/7cKxqHtix/5OYO6wKfmmmfMFecoctUIzK2eibYolc0061mPoHfnkDskNGMai7GWzdpv0XS7Yq7ZNjhA4YREsPkGoU91zx59eJ74pGmAPhxgdkK3VgEdDeoxtiUKh0ZbtfbLh7gw43mAZAlO6pp9Xm5EZB7uZrMtyrSPTzOHb9rrLrKmJ7CBnUxt6aoZJCqHDlOPLt/+/v2Rx7v6np1iVLRDI/JxxpvtW3LhTpIx3nHGJ12SPFgtWNz8+VLZCiPD55ZawutXixzUZ7nW/8+vsSIxaveB7KyKrj2rjCp8fowJi0jxWM8DUAHXTgHRk9c=) 2025-09-08 00:20:43.743490 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCC6fsqQUXeSI9d7CiAGgXMq3ZkwoC4esXEyytIjbbxYitQ0NT+ZOY0QTtNSP/rYYZm7KMhxAK85luJkga1wR/0=) 2025-09-08 00:20:43.743608 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBfVxnj4MAenQSANJtVislavPpRr9wB8l9DHm+aySNZ1) 2025-09-08 00:20:43.743625 | orchestrator | 2025-09-08 00:20:43.743637 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:43.743699 | orchestrator | Monday 08 September 2025 00:20:39 +0000 (0:00:01.132) 0:00:25.476 ****** 2025-09-08 00:20:43.743714 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqc4CeIm+DoJtyo7tLTLdKq/LQ91SCVo1FNv05Od0HILVZ5tZdLsz9QJWJSdmOF4RVo2dMF5D44ZhVm/DI3O7ytoio2MoAqXyo1DtXtYxjorpp8gnt5rAf3M1goSIqoI9tqmCeXcdWQjjH+kJBXxPzU+MMLJE6vbvsHdft3ktDyVUjppBveNJ5u3uOMxgZNBgXo3zFzG6ainAFhL1+hNJqjt1Qq+4NyeZVspUx0IrgcE0tgrOaOqmF6Sf4G2N5J49UtGu57LNs3/u/aPprJ0WbP/kI/iRRni91u1Zsi4S07o+08DKcotU5ssvB4kqujFiHQ4t5fK7cy6vhuC4c3G7HFIYAwkIIDrjmqhSUcoNg5iuBWkxLBKgXyPlRXUripW27HqY/N9moXvr/a7wrSCHpn9Aysr75+dmlYTgeeJgHKEYYHgvgeDQxZGePwvc0cyPxwu4ure3fNO8NpSTLmF8ODIn+vddHNLj+cLLlDqnXoCL4ewIp2Vw7dv524k6MwI8=) 2025-09-08 00:20:43.743753 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEL9CacLzQIeTS60D7p/HhhXM20F75jVTM4FnZMA1O+WOupJy2RDnR5kXJ27kB2i+jyotAUkaH5O67xtILu67sc=) 2025-09-08 00:20:43.743765 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFNgKKY20wEQ7WJtbU0os2Y6dUTX9p9b7zU+aXbXydqN) 2025-09-08 00:20:43.743776 | orchestrator | 2025-09-08 00:20:43.743787 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:43.743798 | orchestrator | Monday 08 September 2025 00:20:40 +0000 (0:00:01.133) 0:00:26.609 ****** 2025-09-08 00:20:43.743808 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID/c/b8X0iUDgzdeQ5eU0/FU+33QA4ijpL2pzypmV3Uz) 2025-09-08 00:20:43.743820 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRzD6foUfutf5BIx6VU//9UOBDCun9JRIAyYrjl873cRnk6pC9fMCINLJDlm4JUfH8fJ8vC1xG1Kit69LFxHq76Dulg7UCAQLtZhbgcZeZhZ+dwOET3QHYPHYsFviN3WuYxpBNQa+V1/xKNiuPpzlzvsJ0NWIfafJuwRKkeccDIUhpOeQYMkPmlPm594P72yLGuG5TFeH78Y8svZZHR4mjyv9Qmy46De+pXeSm1/XeXjvdI95DKpjcnfrHWj9bKkuFvImpypMlADE9SItycXuurprYsvpT1Aj11w4c8jvyp6rEnMqBuStR/8j1Td0K3A7s5P5GmMWAU31ouo8KxP2h7bE9qXAfFPZ9TI14kFg1IJQsMDDPyMoJe26rv2XkggnjajhKoaxXp3UIRwJvGdKcrZjrnuOKf65sOhvnFqiB9TFBtEYQtEks/acRPiNob3B8XHwLHWVkJ63Oq+xof+2tm0Of47nQwwUM8de6wkXIKfr05GK0c3jS46mHrr9E23U=) 2025-09-08 00:20:43.743849 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHl7rBSfRjgAOxGEJI73aGZFCpt/IuBR34OoHX14O4twHEaXc9U1a7EtyKH2U088UZUyGVvn4Jewe2NRR4NUMsk=) 2025-09-08 00:20:43.743861 | orchestrator | 2025-09-08 00:20:43.743872 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-08 00:20:43.743882 | orchestrator | Monday 08 September 2025 00:20:41 +0000 (0:00:01.127) 
0:00:27.736 ****** 2025-09-08 00:20:43.743893 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEo6IBpVvnIudBI6sb9RshJVYC/MFQ0YMzmosqeJtN95) 2025-09-08 00:20:43.743903 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiMRhkWFKCBwXdPRYC3gdhvyMQ78/FkIt1Nr2sVVdjL0OF4IuikpnWIoOxkG1MQYViRtyhKo00663eSfD8hNAJpIsYPNCGXkXGAHpkHT1ib429w23gXzjQqmhizDubxjLidPx36+IKhkZQYfLbp208IKRECZ5k9BalHtXm5hJGApQNt7anvBwywJOxs+un8gggOJKznuXhJPGpYtpjvMqaWQ4JtlG6h+tM05NSsNRRra1C1m1qMvaV5AQ9tW3tr2eZCq6eCfqzu05tWoWeGgychTZCgGAkRwUybCizlxJs9O6zufYm9GvGYE5jV8wP2DaYFP9SEJWEFBvuyB7v6eX0v6uULflKhyGYUEo1DENGGhxO2S+5Ex1s+gztjLdspIV0G6jyppWcoJGpN+wxGdjCuqEFl9i3K7ta0NHj5HaVZZfmaRtF2Ak4QbsO/3PFc5aThkgkRY+6PZAuUbfLqunmzOWSqKRcEIeHBw8Bvzjr3FJ4WLtGgIdvCjI0YtdZhOs=) 2025-09-08 00:20:43.743915 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKcOtgGvDTlV2zhlFnyLsTDQH/axjx6ukSn9TjTj6bRY2Kdqpfd9IYnpAliePk/0YhnggOGngYsZYX79nQ8p2t4=) 2025-09-08 00:20:43.743926 | orchestrator | 2025-09-08 00:20:43.743937 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-08 00:20:43.743948 | orchestrator | Monday 08 September 2025 00:20:42 +0000 (0:00:01.120) 0:00:28.857 ****** 2025-09-08 00:20:43.743959 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-08 00:20:43.743970 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-08 00:20:43.743999 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-08 00:20:43.744018 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-08 00:20:43.744032 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-08 00:20:43.744045 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-08 
00:20:43.744057 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-08 00:20:43.744070 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:20:43.744083 | orchestrator | 2025-09-08 00:20:43.744096 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-08 00:20:43.744108 | orchestrator | Monday 08 September 2025 00:20:42 +0000 (0:00:00.176) 0:00:29.034 ****** 2025-09-08 00:20:43.744121 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:20:43.744134 | orchestrator | 2025-09-08 00:20:43.744147 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-08 00:20:43.744164 | orchestrator | Monday 08 September 2025 00:20:42 +0000 (0:00:00.075) 0:00:29.110 ****** 2025-09-08 00:20:43.744177 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:20:43.744191 | orchestrator | 2025-09-08 00:20:43.744203 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-08 00:20:43.744215 | orchestrator | Monday 08 September 2025 00:20:42 +0000 (0:00:00.060) 0:00:29.170 ****** 2025-09-08 00:20:43.744227 | orchestrator | changed: [testbed-manager] 2025-09-08 00:20:43.744240 | orchestrator | 2025-09-08 00:20:43.744253 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:20:43.744266 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-08 00:20:43.744280 | orchestrator | 2025-09-08 00:20:43.744293 | orchestrator | 2025-09-08 00:20:43.744305 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:20:43.744317 | orchestrator | Monday 08 September 2025 00:20:43 +0000 (0:00:00.477) 0:00:29.648 ****** 2025-09-08 00:20:43.744329 | orchestrator | =============================================================================== 
2025-09-08 00:20:43.744342 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.04s
2025-09-08 00:20:43.744355 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.52s
2025-09-08 00:20:43.744368 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.19s
2025-09-08 00:20:43.744381 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.25s
2025-09-08 00:20:43.744395 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-09-08 00:20:43.744413 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-09-08 00:20:43.744432 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-09-08 00:20:43.744450 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-09-08 00:20:43.744467 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-09-08 00:20:43.744484 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-09-08 00:20:43.744501 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-09-08 00:20:43.744518 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-09-08 00:20:43.744535 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-09-08 00:20:43.744553 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-09-08 00:20:43.744571 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-09-08 00:20:43.744588 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-09-08 00:20:43.744608 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2025-09-08 00:20:43.744627 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-09-08 00:20:43.744693 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-09-08 00:20:43.744707 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-09-08 00:20:44.012173 | orchestrator | + osism apply squid 2025-09-08 00:20:55.938170 | orchestrator | 2025-09-08 00:20:55 | INFO  | Task 64657def-d698-4bf6-8e20-e3171b9e300e (squid) was prepared for execution. 2025-09-08 00:20:55.938295 | orchestrator | 2025-09-08 00:20:55 | INFO  | It takes a moment until task 64657def-d698-4bf6-8e20-e3171b9e300e (squid) has been started and output is visible here. 2025-09-08 00:22:51.897185 | orchestrator | 2025-09-08 00:22:51.897314 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-08 00:22:51.897331 | orchestrator | 2025-09-08 00:22:51.897343 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-08 00:22:51.897354 | orchestrator | Monday 08 September 2025 00:20:59 +0000 (0:00:00.167) 0:00:00.167 ****** 2025-09-08 00:22:51.897366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-08 00:22:51.897378 | orchestrator | 2025-09-08 00:22:51.897390 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-08 00:22:51.897401 | orchestrator | Monday 08 September 2025 00:20:59 +0000 (0:00:00.087) 0:00:00.255 ****** 2025-09-08 00:22:51.897412 | orchestrator | ok: [testbed-manager] 2025-09-08 00:22:51.897424 | orchestrator | 2025-09-08 
00:22:51.897435 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-08 00:22:51.897446 | orchestrator | Monday 08 September 2025 00:21:01 +0000 (0:00:01.463) 0:00:01.718 ****** 2025-09-08 00:22:51.897457 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-08 00:22:51.897469 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-08 00:22:51.897479 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-08 00:22:51.897491 | orchestrator | 2025-09-08 00:22:51.897501 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-08 00:22:51.897512 | orchestrator | Monday 08 September 2025 00:21:02 +0000 (0:00:01.192) 0:00:02.911 ****** 2025-09-08 00:22:51.897523 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-08 00:22:51.897534 | orchestrator | 2025-09-08 00:22:51.897545 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-08 00:22:51.897556 | orchestrator | Monday 08 September 2025 00:21:03 +0000 (0:00:01.143) 0:00:04.055 ****** 2025-09-08 00:22:51.897567 | orchestrator | ok: [testbed-manager] 2025-09-08 00:22:51.897578 | orchestrator | 2025-09-08 00:22:51.897589 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-08 00:22:51.897599 | orchestrator | Monday 08 September 2025 00:21:04 +0000 (0:00:00.376) 0:00:04.431 ****** 2025-09-08 00:22:51.897610 | orchestrator | changed: [testbed-manager] 2025-09-08 00:22:51.897621 | orchestrator | 2025-09-08 00:22:51.897632 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-08 00:22:51.897713 | orchestrator | Monday 08 September 2025 00:21:05 +0000 (0:00:00.927) 0:00:05.359 ****** 2025-09-08 00:22:51.897728 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage 
squid service (10 retries left). 2025-09-08 00:22:51.897742 | orchestrator | ok: [testbed-manager] 2025-09-08 00:22:51.897755 | orchestrator | 2025-09-08 00:22:51.897767 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-08 00:22:51.897780 | orchestrator | Monday 08 September 2025 00:21:36 +0000 (0:00:31.509) 0:00:36.869 ****** 2025-09-08 00:22:51.897793 | orchestrator | changed: [testbed-manager] 2025-09-08 00:22:51.897805 | orchestrator | 2025-09-08 00:22:51.897818 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-08 00:22:51.897831 | orchestrator | Monday 08 September 2025 00:21:50 +0000 (0:00:14.157) 0:00:51.027 ****** 2025-09-08 00:22:51.897844 | orchestrator | Pausing for 60 seconds 2025-09-08 00:22:51.897887 | orchestrator | changed: [testbed-manager] 2025-09-08 00:22:51.897900 | orchestrator | 2025-09-08 00:22:51.897913 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-08 00:22:51.897926 | orchestrator | Monday 08 September 2025 00:22:50 +0000 (0:01:00.077) 0:01:51.104 ****** 2025-09-08 00:22:51.897938 | orchestrator | ok: [testbed-manager] 2025-09-08 00:22:51.897951 | orchestrator | 2025-09-08 00:22:51.897963 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-08 00:22:51.897975 | orchestrator | Monday 08 September 2025 00:22:50 +0000 (0:00:00.061) 0:01:51.166 ****** 2025-09-08 00:22:51.897987 | orchestrator | changed: [testbed-manager] 2025-09-08 00:22:51.898000 | orchestrator | 2025-09-08 00:22:51.898012 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:22:51.898095 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:22:51.898109 | orchestrator | 2025-09-08 00:22:51.898120 | orchestrator | 2025-09-08 
00:22:51.898131 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:22:51.898142 | orchestrator | Monday 08 September 2025 00:22:51 +0000 (0:00:00.694) 0:01:51.860 ****** 2025-09-08 00:22:51.898152 | orchestrator | =============================================================================== 2025-09-08 00:22:51.898163 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-09-08 00:22:51.898173 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.51s 2025-09-08 00:22:51.898184 | orchestrator | osism.services.squid : Restart squid service --------------------------- 14.16s 2025-09-08 00:22:51.898195 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.46s 2025-09-08 00:22:51.898206 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s 2025-09-08 00:22:51.898216 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.14s 2025-09-08 00:22:51.898227 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2025-09-08 00:22:51.898238 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.69s 2025-09-08 00:22:51.898248 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-09-08 00:22:51.898259 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-09-08 00:22:51.898270 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-09-08 00:22:52.207740 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-09-08 00:22:52.207825 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-09-08 00:22:52.212387 | 
orchestrator | ++ semver 9.2.0 9.0.0 2025-09-08 00:22:52.281686 | orchestrator | + [[ 1 -lt 0 ]] 2025-09-08 00:22:52.282622 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-08 00:23:04.329990 | orchestrator | 2025-09-08 00:23:04 | INFO  | Task 7ee84ada-f037-46e5-9887-ba0a1fe032c3 (operator) was prepared for execution. 2025-09-08 00:23:04.330195 | orchestrator | 2025-09-08 00:23:04 | INFO  | It takes a moment until task 7ee84ada-f037-46e5-9887-ba0a1fe032c3 (operator) has been started and output is visible here. 2025-09-08 00:23:21.835658 | orchestrator | 2025-09-08 00:23:21.835795 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-08 00:23:21.835813 | orchestrator | 2025-09-08 00:23:21.835826 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 00:23:21.835837 | orchestrator | Monday 08 September 2025 00:23:08 +0000 (0:00:00.156) 0:00:00.156 ****** 2025-09-08 00:23:21.835848 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:23:21.835861 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:23:21.835872 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:23:21.835882 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:23:21.835893 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:23:21.835936 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:23:21.835947 | orchestrator | 2025-09-08 00:23:21.835958 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-08 00:23:21.835969 | orchestrator | Monday 08 September 2025 00:23:12 +0000 (0:00:03.759) 0:00:03.916 ****** 2025-09-08 00:23:21.835980 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:23:21.835991 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:23:21.836001 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:23:21.836012 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:23:21.836022 | orchestrator | ok: [testbed-node-5] 
2025-09-08 00:23:21.836033 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:23:21.836043 | orchestrator |
2025-09-08 00:23:21.836055 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-09-08 00:23:21.836065 | orchestrator |
2025-09-08 00:23:21.836076 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-09-08 00:23:21.836087 | orchestrator | Monday 08 September 2025 00:23:12 +0000 (0:00:00.782) 0:00:04.698 ******
2025-09-08 00:23:21.836098 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:23:21.836108 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:23:21.836119 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:23:21.836129 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:23:21.836140 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:23:21.836152 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:23:21.836164 | orchestrator |
2025-09-08 00:23:21.836177 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-09-08 00:23:21.836190 | orchestrator | Monday 08 September 2025 00:23:13 +0000 (0:00:00.169) 0:00:04.868 ******
2025-09-08 00:23:21.836202 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:23:21.836214 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:23:21.836226 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:23:21.836239 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:23:21.836251 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:23:21.836262 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:23:21.836274 | orchestrator |
2025-09-08 00:23:21.836286 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-09-08 00:23:21.836299 | orchestrator | Monday 08 September 2025 00:23:13 +0000 (0:00:00.198) 0:00:05.067 ******
2025-09-08 00:23:21.836312 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:23:21.836325 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:23:21.836338 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:23:21.836351 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:23:21.836363 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:23:21.836375 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:23:21.836387 | orchestrator |
2025-09-08 00:23:21.836399 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-09-08 00:23:21.836411 | orchestrator | Monday 08 September 2025 00:23:14 +0000 (0:00:00.709) 0:00:05.776 ******
2025-09-08 00:23:21.836424 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:23:21.836436 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:23:21.836449 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:23:21.836461 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:23:21.836474 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:23:21.836486 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:23:21.836500 | orchestrator |
2025-09-08 00:23:21.836511 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-09-08 00:23:21.836521 | orchestrator | Monday 08 September 2025 00:23:14 +0000 (0:00:02.234) 0:00:06.643 ******
2025-09-08 00:23:21.836532 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-09-08 00:23:21.836543 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-09-08 00:23:21.836554 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-09-08 00:23:21.836564 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-09-08 00:23:21.836575 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-09-08 00:23:21.836586 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-09-08 00:23:21.836596 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-09-08 00:23:21.836615 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-09-08 00:23:21.836626 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-09-08 00:23:21.836665 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-09-08 00:23:21.836677 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-09-08 00:23:21.836688 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-09-08 00:23:21.836699 | orchestrator |
2025-09-08 00:23:21.836714 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-09-08 00:23:21.836725 | orchestrator | Monday 08 September 2025 00:23:17 +0000 (0:00:02.234) 0:00:08.877 ******
2025-09-08 00:23:21.836736 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:23:21.836746 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:23:21.836757 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:23:21.836767 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:23:21.836778 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:23:21.836788 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:23:21.836799 | orchestrator |
2025-09-08 00:23:21.836810 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-09-08 00:23:21.836822 | orchestrator | Monday 08 September 2025 00:23:18 +0000 (0:00:01.308) 0:00:10.186 ******
2025-09-08 00:23:21.836832 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-09-08 00:23:21.836843 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-09-08 00:23:21.836854 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-08 00:23:21.836865 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:21.836893 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:21.836904 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:21.836915 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:21.836925 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:21.836936 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-08 00:23:21.836947 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:21.836957 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:21.836989 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:21.837001 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:21.837011 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:21.837022 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-08 00:23:21.837033 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:21.837048 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:21.837059 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:21.837070 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:21.837081 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:21.837091 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-08 00:23:21.837102 | orchestrator |
2025-09-08 00:23:21.837113 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-08 00:23:21.837125 | orchestrator | Monday 08 September 2025 00:23:19 +0000 (0:00:01.292) 0:00:11.479 ******
2025-09-08 00:23:21.837135 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:21.837146 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:21.837156 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:21.837167 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:21.837178 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:21.837196 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:21.837206 | orchestrator |
2025-09-08 00:23:21.837217 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-08 00:23:21.837228 | orchestrator | Monday 08 September 2025 00:23:19 +0000 (0:00:00.568) 0:00:11.651 ******
2025-09-08 00:23:21.837238 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:23:21.837249 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:23:21.837259 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:23:21.837270 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:23:21.837281 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:23:21.837291 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:23:21.837302 | orchestrator |
2025-09-08 00:23:21.837313 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-08 00:23:21.837323 | orchestrator | Monday 08 September 2025 00:23:20 +0000 (0:00:00.187) 0:00:12.220 ******
2025-09-08 00:23:21.837334 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:21.837344 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:21.837354 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:21.837365 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:21.837376 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:21.837386 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:21.837397 | orchestrator |
2025-09-08 00:23:21.837407 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-08 00:23:21.837418 | orchestrator | Monday 08 September 2025 00:23:20 +0000 (0:00:00.187) 0:00:12.407 ******
2025-09-08 00:23:21.837429 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-08 00:23:21.837439 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:23:21.837450 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-08 00:23:21.837460 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:23:21.837471 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-08 00:23:21.837481 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-08 00:23:21.837492 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:23:21.837502 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:23:21.837513 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-08 00:23:21.837523 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-08 00:23:21.837534 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:23:21.837544 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:23:21.837555 | orchestrator |
2025-09-08 00:23:21.837566 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-08 00:23:21.837576 | orchestrator | Monday 08 September 2025 00:23:21 +0000 (0:00:00.680) 0:00:13.088 ******
2025-09-08 00:23:21.837587 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:21.837597 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:21.837608 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:21.837619 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:21.837629 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:21.837657 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:21.837668 | orchestrator |
2025-09-08 00:23:21.837679 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-08 00:23:21.837690 | orchestrator | Monday 08 September 2025 00:23:21 +0000 (0:00:00.176) 0:00:13.265 ******
2025-09-08 00:23:21.837700 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:21.837711 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:21.837722 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:21.837733 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:21.837743 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:21.837754 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:21.837764 | orchestrator |
2025-09-08 00:23:21.837775 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-08 00:23:21.837786 | orchestrator | Monday 08 September 2025 00:23:21 +0000 (0:00:00.176) 0:00:13.441 ******
2025-09-08 00:23:21.837796 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:21.837814 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:21.837824 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:21.837835 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:21.837853 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:22.921982 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:22.922162 | orchestrator |
2025-09-08 00:23:22.922181 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-08 00:23:22.922194 | orchestrator | Monday 08 September 2025 00:23:21 +0000 (0:00:00.141) 0:00:13.583 ******
2025-09-08 00:23:22.922205 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:23:22.922216 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:23:22.922227 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:23:22.922237 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:23:22.922248 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:23:22.922259 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:23:22.922269 | orchestrator |
2025-09-08 00:23:22.922280 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-08 00:23:22.922291 | orchestrator | Monday 08 September 2025 00:23:22 +0000 (0:00:00.644) 0:00:14.227 ******
2025-09-08 00:23:22.922301 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:23:22.922312 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:23:22.922322 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:23:22.922333 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:23:22.922344 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:23:22.922354 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:23:22.922365 | orchestrator |
2025-09-08 00:23:22.922375 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:23:22.922388 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:22.922401 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:22.922412 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:22.922422 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:22.922433 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:22.922444 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:23:22.922454 | orchestrator |
2025-09-08 00:23:22.922465 | orchestrator |
2025-09-08 00:23:22.922476 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:23:22.922487 | orchestrator | Monday 08 September 2025 00:23:22 +0000 (0:00:00.211) 0:00:14.438 ******
2025-09-08 00:23:22.922498 | orchestrator | ===============================================================================
2025-09-08 00:23:22.922511 | orchestrator | Gathering Facts --------------------------------------------------------- 3.76s
2025-09-08 00:23:22.922523 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 2.23s
2025-09-08 00:23:22.922535 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.31s
2025-09-08 00:23:22.922547 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s
2025-09-08 00:23:22.922560 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s
2025-09-08 00:23:22.922572 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2025-09-08 00:23:22.922585 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.71s
2025-09-08 00:23:22.922631 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s
2025-09-08 00:23:22.922672 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2025-09-08 00:23:22.922686 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s
2025-09-08 00:23:22.922698 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2025-09-08 00:23:22.922711 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s
2025-09-08 00:23:22.922723 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2025-09-08 00:23:22.922736 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2025-09-08 00:23:22.922748 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2025-09-08 00:23:22.922761 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2025-09-08 00:23:22.922773 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2025-09-08 00:23:22.922785 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-09-08 00:23:23.210866 | orchestrator | + osism apply --environment custom facts
2025-09-08 00:23:25.008699 | orchestrator | 2025-09-08 00:23:25 | INFO  | Trying to run play facts in environment custom
2025-09-08 00:23:35.212749 | orchestrator | 2025-09-08 00:23:35 | INFO  | Task c61feebc-a8d9-407b-8ed8-fdb29feb188e (facts) was prepared for execution.
2025-09-08 00:23:35.212883 | orchestrator | 2025-09-08 00:23:35 | INFO  | It takes a moment until task c61feebc-a8d9-407b-8ed8-fdb29feb188e (facts) has been started and output is visible here.
2025-09-08 00:24:19.712187 | orchestrator |
2025-09-08 00:24:19.712311 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-08 00:24:19.712330 | orchestrator |
2025-09-08 00:24:19.712343 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-08 00:24:19.712355 | orchestrator | Monday 08 September 2025 00:23:39 +0000 (0:00:00.077) 0:00:00.077 ******
2025-09-08 00:24:19.712367 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:19.712379 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:24:19.712390 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:19.712401 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:24:19.712412 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:19.712423 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:24:19.712434 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:19.712444 | orchestrator |
2025-09-08 00:24:19.712455 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-08 00:24:19.712487 | orchestrator | Monday 08 September 2025 00:23:40 +0000 (0:00:01.404) 0:00:01.482 ******
2025-09-08 00:24:19.712498 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:19.712509 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:24:19.712520 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:19.712530 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:24:19.712549 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:19.712559 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:24:19.712570 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:19.712581 | orchestrator |
2025-09-08 00:24:19.712591 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-08 00:24:19.712602 | orchestrator |
2025-09-08 00:24:19.712613 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-08 00:24:19.712624 | orchestrator | Monday 08 September 2025 00:23:41 +0000 (0:00:01.195) 0:00:02.678 ******
2025-09-08 00:24:19.712634 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:19.712675 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:19.712686 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:19.712697 | orchestrator |
2025-09-08 00:24:19.712708 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-08 00:24:19.712722 | orchestrator | Monday 08 September 2025 00:23:41 +0000 (0:00:00.103) 0:00:02.781 ******
2025-09-08 00:24:19.712758 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:19.712771 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:19.712784 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:19.712795 | orchestrator |
2025-09-08 00:24:19.712808 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-08 00:24:19.712821 | orchestrator | Monday 08 September 2025 00:23:42 +0000 (0:00:00.190) 0:00:02.972 ******
2025-09-08 00:24:19.712833 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:19.712845 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:19.712857 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:19.712869 | orchestrator |
2025-09-08 00:24:19.712882 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-08 00:24:19.712894 | orchestrator | Monday 08 September 2025 00:23:42 +0000 (0:00:00.205) 0:00:03.177 ******
2025-09-08 00:24:19.712907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:24:19.712921 | orchestrator |
2025-09-08 00:24:19.712934 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-08 00:24:19.712947 | orchestrator | Monday 08 September 2025 00:23:42 +0000 (0:00:00.156) 0:00:03.334 ******
2025-09-08 00:24:19.712959 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:19.712971 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:19.712984 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:19.712996 | orchestrator |
2025-09-08 00:24:19.713009 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-08 00:24:19.713021 | orchestrator | Monday 08 September 2025 00:23:42 +0000 (0:00:00.432) 0:00:03.767 ******
2025-09-08 00:24:19.713033 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:24:19.713046 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:24:19.713059 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:24:19.713070 | orchestrator |
2025-09-08 00:24:19.713081 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-08 00:24:19.713092 | orchestrator | Monday 08 September 2025 00:23:42 +0000 (0:00:00.123) 0:00:03.890 ******
2025-09-08 00:24:19.713102 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:19.713113 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:19.713124 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:19.713134 | orchestrator |
2025-09-08 00:24:19.713145 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-08 00:24:19.713156 | orchestrator | Monday 08 September 2025 00:23:43 +0000 (0:00:01.052) 0:00:04.942 ******
2025-09-08 00:24:19.713166 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:19.713177 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:19.713188 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:19.713198 | orchestrator |
2025-09-08 00:24:19.713209 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-08 00:24:19.713220 | orchestrator | Monday 08 September 2025 00:23:44 +0000 (0:00:00.470) 0:00:05.413 ******
2025-09-08 00:24:19.713230 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:19.713241 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:19.713252 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:19.713262 | orchestrator |
2025-09-08 00:24:19.713273 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-08 00:24:19.713284 | orchestrator | Monday 08 September 2025 00:23:45 +0000 (0:00:01.096) 0:00:06.510 ******
2025-09-08 00:24:19.713294 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:19.713305 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:19.713316 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:19.713326 | orchestrator |
2025-09-08 00:24:19.713337 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-08 00:24:19.713348 | orchestrator | Monday 08 September 2025 00:24:02 +0000 (0:00:17.060) 0:00:23.571 ******
2025-09-08 00:24:19.713358 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:24:19.713377 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:24:19.713389 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:24:19.713399 | orchestrator |
2025-09-08 00:24:19.713410 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-08 00:24:19.713439 | orchestrator | Monday 08 September 2025 00:24:02 +0000 (0:00:00.112) 0:00:23.683 ******
2025-09-08 00:24:19.713451 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:19.713462 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:19.713473 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:19.713483 | orchestrator |
2025-09-08 00:24:19.713495 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-08 00:24:19.713505 | orchestrator | Monday 08 September 2025 00:24:10 +0000 (0:00:07.824) 0:00:31.507 ******
2025-09-08 00:24:19.713516 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:19.713527 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:19.713538 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:19.713549 | orchestrator |
2025-09-08 00:24:19.713560 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-08 00:24:19.713570 | orchestrator | Monday 08 September 2025 00:24:10 +0000 (0:00:00.458) 0:00:31.966 ******
2025-09-08 00:24:19.713581 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-08 00:24:19.713592 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-08 00:24:19.713603 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-08 00:24:19.713619 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-08 00:24:19.713630 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-08 00:24:19.713659 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-08 00:24:19.713670 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-08 00:24:19.713680 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-08 00:24:19.713691 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-08 00:24:19.713702 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-08 00:24:19.713713 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-08 00:24:19.713723 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-08 00:24:19.713734 | orchestrator |
2025-09-08 00:24:19.713745 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-08 00:24:19.713756 | orchestrator | Monday 08 September 2025 00:24:14 +0000 (0:00:03.548) 0:00:35.514 ******
2025-09-08 00:24:19.713766 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:19.713777 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:19.713788 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:19.713798 | orchestrator |
2025-09-08 00:24:19.713809 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-08 00:24:19.713820 | orchestrator |
2025-09-08 00:24:19.713831 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-08 00:24:19.713842 | orchestrator | Monday 08 September 2025 00:24:15 +0000 (0:00:01.310) 0:00:36.824 ******
2025-09-08 00:24:19.713852 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:19.713863 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:19.713874 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:19.713885 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:19.713895 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:19.713906 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:19.713916 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:19.713927 | orchestrator |
2025-09-08 00:24:19.713938 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:24:19.713949 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:24:19.713961 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:24:19.713979 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:24:19.713991 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:24:19.714001 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:24:19.714063 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:24:19.714077 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:24:19.714087 | orchestrator |
2025-09-08 00:24:19.714098 | orchestrator |
2025-09-08 00:24:19.714109 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:24:19.714120 | orchestrator | Monday 08 September 2025 00:24:19 +0000 (0:00:03.827) 0:00:40.652 ******
2025-09-08 00:24:19.714130 | orchestrator | ===============================================================================
2025-09-08 00:24:19.714141 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.06s
2025-09-08 00:24:19.714152 | orchestrator | Install required packages (Debian) -------------------------------------- 7.82s
2025-09-08 00:24:19.714162 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.83s
2025-09-08 00:24:19.714173 | orchestrator | Copy fact files --------------------------------------------------------- 3.55s
2025-09-08 00:24:19.714184 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2025-09-08 00:24:19.714194 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.31s
2025-09-08 00:24:19.714213 | orchestrator | Copy fact file ---------------------------------------------------------- 1.20s
2025-09-08 00:24:19.935483 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2025-09-08 00:24:19.935542 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2025-09-08 00:24:19.935554 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-09-08 00:24:19.935565 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2025-09-08 00:24:19.935575 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-09-08 00:24:19.935586 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2025-09-08 00:24:19.935597 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2025-09-08 00:24:19.935608 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2025-09-08 00:24:19.935619 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2025-09-08 00:24:19.935630 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-09-08 00:24:19.935687 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-09-08 00:24:20.221112 | orchestrator | + osism apply bootstrap
2025-09-08 00:24:32.225548 | orchestrator | 2025-09-08 00:24:32 | INFO  | Task 3202bd60-b26a-4e31-89d2-719564322db0 (bootstrap) was prepared for execution.
2025-09-08 00:24:32.225704 | orchestrator | 2025-09-08 00:24:32 | INFO  | It takes a moment until task 3202bd60-b26a-4e31-89d2-719564322db0 (bootstrap) has been started and output is visible here.
2025-09-08 00:24:48.170734 | orchestrator |
2025-09-08 00:24:48.170861 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-08 00:24:48.170879 | orchestrator |
2025-09-08 00:24:48.170891 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-08 00:24:48.170924 | orchestrator | Monday 08 September 2025 00:24:36 +0000 (0:00:00.155) 0:00:00.155 ******
2025-09-08 00:24:48.170935 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:48.170947 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:48.170958 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:48.170969 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:48.170980 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:48.170991 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:48.171002 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:48.171012 | orchestrator |
2025-09-08 00:24:48.171041 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-08 00:24:48.171052 | orchestrator |
2025-09-08 00:24:48.171063 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-08 00:24:48.171074 | orchestrator | Monday 08 September 2025 00:24:36 +0000 (0:00:00.255) 0:00:00.410 ******
2025-09-08 00:24:48.171085 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:48.171096 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:48.171106 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:48.171117 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:48.171128 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:48.171138 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:48.171149 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:48.171159 | orchestrator |
2025-09-08 00:24:48.171170 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-08 00:24:48.171180 | orchestrator |
2025-09-08 00:24:48.171191 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-08 00:24:48.171202 | orchestrator | Monday 08 September 2025 00:24:40 +0000 (0:00:03.674) 0:00:04.085 ******
2025-09-08 00:24:48.171216 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-08 00:24:48.171229 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-08 00:24:48.171241 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-08 00:24:48.171253 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-08 00:24:48.171266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:24:48.171279 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-08 00:24:48.171291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:24:48.171304 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-08 00:24:48.171316 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-08 00:24:48.171329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:24:48.171341 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-08 00:24:48.171354 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-08 00:24:48.171366 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-08 00:24:48.171379 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-08 00:24:48.171392 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-08 00:24:48.171405 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-08 00:24:48.171417 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-08 00:24:48.171429 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-08 00:24:48.171441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-08 00:24:48.171453 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-08 00:24:48.171466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-08 00:24:48.171480 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:24:48.171492 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-08 00:24:48.171505 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-08 00:24:48.171518 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-08 00:24:48.171539 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:24:48.171551 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:24:48.171564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:24:48.171575 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-08 00:24:48.171586 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-08 00:24:48.171597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:24:48.171607 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-08 00:24:48.171618 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:24:48.171629 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-08 00:24:48.171661 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-08 00:24:48.171673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:24:48.171683 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:24:48.171694 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:24:48.171704 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-08 00:24:48.171719 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:24:48.171730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:24:48.171741 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:24:48.171751 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-08 00:24:48.171762 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:24:48.171772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:24:48.171783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:24:48.171813 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-08 00:24:48.171825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:24:48.171836 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-08 00:24:48.171846 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:24:48.171857 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-08 00:24:48.171867 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-08 00:24:48.171878 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-08 00:24:48.171888 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:24:48.171899 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-08 00:24:48.171910 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:24:48.171920 | orchestrator |
2025-09-08 00:24:48.171931 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-08 00:24:48.171942 | orchestrator |
2025-09-08 00:24:48.171952 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-08 00:24:48.171963 | orchestrator | Monday 08 September 2025 00:24:40 +0000
(0:00:00.445) 0:00:04.530 ******
2025-09-08 00:24:48.171973 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:48.171984 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:48.171995 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:48.172006 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:48.172017 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:48.172027 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:48.172038 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:48.172049 | orchestrator |
2025-09-08 00:24:48.172060 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-08 00:24:48.172070 | orchestrator | Monday 08 September 2025 00:24:41 +0000 (0:00:01.219) 0:00:05.750 ******
2025-09-08 00:24:48.172081 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:48.172092 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:24:48.172102 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:24:48.172113 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:24:48.172124 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:24:48.172141 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:24:48.172152 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:24:48.172163 | orchestrator |
2025-09-08 00:24:48.172174 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-08 00:24:48.172185 | orchestrator | Monday 08 September 2025 00:24:43 +0000 (0:00:01.233) 0:00:06.984 ******
2025-09-08 00:24:48.172196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:24:48.172210 | orchestrator |
2025-09-08 00:24:48.172221 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-08 00:24:48.172232 | orchestrator | Monday 08 September 2025 00:24:43 +0000 (0:00:00.278) 0:00:07.263 ******
2025-09-08 00:24:48.172242 | orchestrator | changed: [testbed-manager]
2025-09-08 00:24:48.172253 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:24:48.172264 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:48.172275 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:24:48.172285 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:24:48.172296 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:48.172306 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:48.172317 | orchestrator |
2025-09-08 00:24:48.172327 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-09-08 00:24:48.172338 | orchestrator | Monday 08 September 2025 00:24:45 +0000 (0:00:02.239) 0:00:09.502 ******
2025-09-08 00:24:48.172349 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:24:48.172361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:24:48.172373 | orchestrator |
2025-09-08 00:24:48.172384 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-09-08 00:24:48.172394 | orchestrator | Monday 08 September 2025 00:24:45 +0000 (0:00:00.308) 0:00:09.810 ******
2025-09-08 00:24:48.172405 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:24:48.172416 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:24:48.172426 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:48.172437 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:24:48.172447 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:48.172457 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:48.172468 | orchestrator |
2025-09-08 00:24:48.172479 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-09-08 00:24:48.172489 | orchestrator | Monday 08 September 2025 00:24:47 +0000 (0:00:01.057) 0:00:10.868 ******
2025-09-08 00:24:48.172500 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:24:48.172510 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:24:48.172521 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:24:48.172532 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:24:48.172542 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:24:48.172553 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:24:48.172563 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:24:48.172574 | orchestrator |
2025-09-08 00:24:48.172584 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-09-08 00:24:48.172595 | orchestrator | Monday 08 September 2025 00:24:47 +0000 (0:00:00.552) 0:00:11.421 ******
2025-09-08 00:24:48.172606 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:24:48.172616 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:24:48.172627 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:24:48.172655 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:24:48.172666 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:24:48.172677 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:24:48.172687 | orchestrator | ok: [testbed-manager]
2025-09-08 00:24:48.172698 | orchestrator |
2025-09-08 00:24:48.172709 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-08 00:24:48.172727 | orchestrator | Monday 08 September 2025 00:24:48 +0000 (0:00:00.456) 0:00:11.877 ******
2025-09-08 00:24:48.172737 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:24:48.172748 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:24:48.172765 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:25:00.392005 | orchestrator | skipping:
[testbed-node-2]
2025-09-08 00:25:00.392129 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:25:00.392144 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:25:00.392157 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:25:00.392168 | orchestrator |
2025-09-08 00:25:00.392180 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-08 00:25:00.392192 | orchestrator | Monday 08 September 2025 00:24:48 +0000 (0:00:00.198) 0:00:12.076 ******
2025-09-08 00:25:00.392205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:25:00.392234 | orchestrator |
2025-09-08 00:25:00.392245 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-08 00:25:00.392258 | orchestrator | Monday 08 September 2025 00:24:48 +0000 (0:00:00.303) 0:00:12.379 ******
2025-09-08 00:25:00.392269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:25:00.392280 | orchestrator |
2025-09-08 00:25:00.392291 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-08 00:25:00.392302 | orchestrator | Monday 08 September 2025 00:24:48 +0000 (0:00:00.324) 0:00:12.704 ******
2025-09-08 00:25:00.392312 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:00.392325 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:00.392336 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:00.392346 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.392357 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.392368 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:00.392378 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.392389 | orchestrator |
2025-09-08 00:25:00.392400 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-08 00:25:00.392411 | orchestrator | Monday 08 September 2025 00:24:50 +0000 (0:00:01.477) 0:00:14.181 ******
2025-09-08 00:25:00.392422 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:25:00.392433 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:25:00.392443 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:25:00.392454 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:25:00.392464 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:25:00.392475 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:25:00.392485 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:25:00.392499 | orchestrator |
2025-09-08 00:25:00.392511 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-08 00:25:00.392524 | orchestrator | Monday 08 September 2025 00:24:50 +0000 (0:00:00.211) 0:00:14.393 ******
2025-09-08 00:25:00.392537 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.392550 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:00.392562 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:00.392575 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:00.392587 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.392600 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.392613 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:00.392625 | orchestrator |
2025-09-08 00:25:00.392663 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-08 00:25:00.392677 | orchestrator | Monday 08 September 2025 00:24:51 +0000 (0:00:00.531) 0:00:14.924 ******
2025-09-08 00:25:00.392691 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:25:00.392725 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:25:00.392738 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:25:00.392751 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:25:00.392763 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:25:00.392775 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:25:00.392787 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:25:00.392800 | orchestrator |
2025-09-08 00:25:00.392813 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-08 00:25:00.392827 | orchestrator | Monday 08 September 2025 00:24:51 +0000 (0:00:00.233) 0:00:15.157 ******
2025-09-08 00:25:00.392840 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.392851 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:25:00.392902 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:25:00.392915 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:25:00.392925 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:25:00.392936 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:25:00.392946 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:25:00.392957 | orchestrator |
2025-09-08 00:25:00.392968 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-08 00:25:00.392979 | orchestrator | Monday 08 September 2025 00:24:51 +0000 (0:00:00.544) 0:00:15.702 ******
2025-09-08 00:25:00.392989 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.393000 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:25:00.393010 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:25:00.393021 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:25:00.393031 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:25:00.393042 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:25:00.393052 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:25:00.393063 | orchestrator |
2025-09-08 00:25:00.393074 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-08 00:25:00.393089 | orchestrator | Monday 08 September 2025 00:24:52 +0000 (0:00:01.122) 0:00:16.825 ******
2025-09-08 00:25:00.393100 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.393111 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:00.393121 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:00.393132 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.393143 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:00.393153 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:00.393164 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.393174 | orchestrator |
2025-09-08 00:25:00.393185 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-08 00:25:00.393196 | orchestrator | Monday 08 September 2025 00:24:54 +0000 (0:00:01.184) 0:00:18.009 ******
2025-09-08 00:25:00.393229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:25:00.393241 | orchestrator |
2025-09-08 00:25:00.393252 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-08 00:25:00.393263 | orchestrator | Monday 08 September 2025 00:24:54 +0000 (0:00:00.409) 0:00:18.419 ******
2025-09-08 00:25:00.393273 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:25:00.393284 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:25:00.393295 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:25:00.393306 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:25:00.393316 | orchestrator | changed: [testbed-node-5]
2025-09-08
00:25:00.393327 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:25:00.393338 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:25:00.393348 | orchestrator |
2025-09-08 00:25:00.393359 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-08 00:25:00.393370 | orchestrator | Monday 08 September 2025 00:24:55 +0000 (0:00:01.238) 0:00:19.658 ******
2025-09-08 00:25:00.393380 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.393400 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:00.393410 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:00.393421 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:00.393432 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.393443 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.393453 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:00.393464 | orchestrator |
2025-09-08 00:25:00.393474 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-08 00:25:00.393485 | orchestrator | Monday 08 September 2025 00:24:56 +0000 (0:00:00.231) 0:00:19.890 ******
2025-09-08 00:25:00.393496 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.393506 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:00.393517 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:00.393527 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:00.393538 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.393548 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.393559 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:00.393570 | orchestrator |
2025-09-08 00:25:00.393580 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-08 00:25:00.393591 | orchestrator | Monday 08 September 2025 00:24:56 +0000 (0:00:00.214) 0:00:20.104 ******
2025-09-08 00:25:00.393602 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.393613 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:00.393623 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:00.393634 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:00.393664 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.393675 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.393685 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:00.393696 | orchestrator |
2025-09-08 00:25:00.393707 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-08 00:25:00.393718 | orchestrator | Monday 08 September 2025 00:24:56 +0000 (0:00:00.249) 0:00:20.353 ******
2025-09-08 00:25:00.393729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:25:00.393742 | orchestrator |
2025-09-08 00:25:00.393753 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-08 00:25:00.393764 | orchestrator | Monday 08 September 2025 00:24:56 +0000 (0:00:00.331) 0:00:20.685 ******
2025-09-08 00:25:00.393775 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.393785 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:00.393796 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:00.393807 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:00.393818 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.393828 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.393839 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:00.393850 | orchestrator |
2025-09-08 00:25:00.393860 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-08 00:25:00.393871 | orchestrator | Monday 08 September 2025 00:24:57 +0000 (0:00:00.529) 0:00:21.214 ******
2025-09-08 00:25:00.393882 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:25:00.393893 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:25:00.393903 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:25:00.393914 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:25:00.393925 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:25:00.393935 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:25:00.393946 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:25:00.393956 | orchestrator |
2025-09-08 00:25:00.393967 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-08 00:25:00.393978 | orchestrator | Monday 08 September 2025 00:24:57 +0000 (0:00:00.251) 0:00:21.466 ******
2025-09-08 00:25:00.393988 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.393999 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:25:00.394009 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:25:00.394121 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:25:00.394133 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.394144 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.394154 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:00.394165 | orchestrator |
2025-09-08 00:25:00.394176 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-08 00:25:00.394187 | orchestrator | Monday 08 September 2025 00:24:58 +0000 (0:00:01.119) 0:00:22.586 ******
2025-09-08 00:25:00.394203 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.394214 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:00.394224 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:00.394235 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:00.394245 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.394256 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.394267 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:00.394277 | orchestrator |
2025-09-08 00:25:00.394288 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-08 00:25:00.394298 | orchestrator | Monday 08 September 2025 00:24:59 +0000 (0:00:00.573) 0:00:23.160 ******
2025-09-08 00:25:00.394309 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:00.394319 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:00.394330 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:00.394341 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:25:00.394360 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:25:40.879688 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:40.879812 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:25:40.879829 | orchestrator |
2025-09-08 00:25:40.879842 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-08 00:25:40.879855 | orchestrator | Monday 08 September 2025 00:25:00 +0000 (0:00:01.050) 0:00:24.210 ******
2025-09-08 00:25:40.879865 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:40.879877 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:40.879888 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:40.879899 | orchestrator | changed: [testbed-manager]
2025-09-08 00:25:40.879909 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:25:40.879920 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:25:40.879931 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:25:40.879941 | orchestrator |
2025-09-08 00:25:40.879952 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-09-08 00:25:40.879963 | orchestrator | Monday 08 September 2025 00:25:17 +0000 (0:00:17.058) 0:00:41.269 ******
2025-09-08 00:25:40.879974 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:40.879985 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:40.879995 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:40.880006 | orchestrator
| ok: [testbed-node-2]
2025-09-08 00:25:40.880016 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:40.880027 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:40.880038 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:40.880048 | orchestrator |
2025-09-08 00:25:40.880059 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-09-08 00:25:40.880070 | orchestrator | Monday 08 September 2025 00:25:17 +0000 (0:00:00.248) 0:00:41.518 ******
2025-09-08 00:25:40.880080 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:40.880091 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:40.880102 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:40.880112 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:40.880123 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:40.880133 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:40.880144 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:40.880154 | orchestrator |
2025-09-08 00:25:40.880165 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-09-08 00:25:40.880176 | orchestrator | Monday 08 September 2025 00:25:17 +0000 (0:00:00.225) 0:00:41.743 ******
2025-09-08 00:25:40.880187 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:40.880200 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:40.880212 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:40.880253 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:40.880265 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:40.880277 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:40.880289 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:40.880301 | orchestrator |
2025-09-08 00:25:40.880314 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-09-08 00:25:40.880327 | orchestrator | Monday 08 September 2025 00:25:18 +0000 (0:00:00.252) 0:00:41.996 ******
2025-09-08 00:25:40.880340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:25:40.880355 | orchestrator |
2025-09-08 00:25:40.880368 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-09-08 00:25:40.880380 | orchestrator | Monday 08 September 2025 00:25:18 +0000 (0:00:00.306) 0:00:42.302 ******
2025-09-08 00:25:40.880391 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:40.880403 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:40.880415 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:40.880427 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:40.880439 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:40.880452 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:40.880464 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:40.880475 | orchestrator |
2025-09-08 00:25:40.880485 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-09-08 00:25:40.880496 | orchestrator | Monday 08 September 2025 00:25:19 +0000 (0:00:01.401) 0:00:43.704 ******
2025-09-08 00:25:40.880507 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:25:40.880518 | orchestrator | changed: [testbed-manager]
2025-09-08 00:25:40.880528 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:25:40.880539 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:25:40.880549 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:25:40.880559 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:25:40.880570 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:25:40.880580 | orchestrator |
2025-09-08 00:25:40.880591 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-09-08 00:25:40.880601 | orchestrator | Monday 08 September 2025 00:25:20 +0000 (0:00:01.065) 0:00:44.770 ******
2025-09-08 00:25:40.880612 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:40.880622 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:40.880633 | orchestrator | ok: [testbed-manager]
2025-09-08 00:25:40.880659 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:40.880670 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:25:40.880681 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:25:40.880691 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:25:40.880702 | orchestrator |
2025-09-08 00:25:40.880713 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-09-08 00:25:40.880723 | orchestrator | Monday 08 September 2025 00:25:21 +0000 (0:00:00.824) 0:00:45.594 ******
2025-09-08 00:25:40.880735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:25:40.880748 | orchestrator |
2025-09-08 00:25:40.880759 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-09-08 00:25:40.880770 | orchestrator | Monday 08 September 2025 00:25:22 +0000 (0:00:00.303) 0:00:45.897 ******
2025-09-08 00:25:40.880780 | orchestrator | changed: [testbed-manager]
2025-09-08 00:25:40.880791 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:25:40.880801 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:25:40.880812 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:25:40.880822 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:25:40.880833 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:25:40.880843 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:25:40.880854 | orchestrator |
2025-09-08 00:25:40.880890 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-09-08 00:25:40.880903 | orchestrator | Monday 08 September 2025 00:25:23 +0000 (0:00:01.059) 0:00:46.956 ******
2025-09-08 00:25:40.880914 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:25:40.880924 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:25:40.880935 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:25:40.880945 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:25:40.880956 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:25:40.880966 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:25:40.880977 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:25:40.880987 | orchestrator |
2025-09-08 00:25:40.880998 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-09-08 00:25:40.881009 | orchestrator | Monday 08 September 2025 00:25:23 +0000 (0:00:00.301) 0:00:47.258 ******
2025-09-08 00:25:40.881019 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:25:40.881030 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:25:40.881040 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:25:40.881051 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:25:40.881061 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:25:40.881072 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:25:40.881082 | orchestrator | changed: [testbed-manager]
2025-09-08 00:25:40.881093 | orchestrator |
2025-09-08 00:25:40.881103 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-09-08 00:25:40.881114 | orchestrator | Monday 08 September 2025 00:25:35 +0000 (0:00:12.430) 0:00:59.688 ******
2025-09-08 00:25:40.881125 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:25:40.881136 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:25:40.881146 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:25:40.881157 | orchestrator | ok: [testbed-manager]
2025-09-08
00:25:40.881167 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:40.881178 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:40.881188 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:40.881199 | orchestrator | 2025-09-08 00:25:40.881210 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-08 00:25:40.881220 | orchestrator | Monday 08 September 2025 00:25:36 +0000 (0:00:00.808) 0:01:00.497 ****** 2025-09-08 00:25:40.881231 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:40.881241 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:40.881252 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:40.881262 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:40.881273 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:40.881283 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:40.881294 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:40.881304 | orchestrator | 2025-09-08 00:25:40.881315 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-08 00:25:40.881325 | orchestrator | Monday 08 September 2025 00:25:37 +0000 (0:00:00.947) 0:01:01.444 ****** 2025-09-08 00:25:40.881336 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:40.881346 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:40.881357 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:40.881367 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:40.881378 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:40.881389 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:40.881399 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:40.881410 | orchestrator | 2025-09-08 00:25:40.881421 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-08 00:25:40.881431 | orchestrator | Monday 08 September 2025 00:25:37 +0000 (0:00:00.260) 0:01:01.705 ****** 2025-09-08 00:25:40.881442 | 
orchestrator | ok: [testbed-manager] 2025-09-08 00:25:40.881453 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:40.881463 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:40.881474 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:40.881484 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:40.881494 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:40.881505 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:40.881522 | orchestrator | 2025-09-08 00:25:40.881551 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-08 00:25:40.881562 | orchestrator | Monday 08 September 2025 00:25:38 +0000 (0:00:00.234) 0:01:01.940 ****** 2025-09-08 00:25:40.881573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:25:40.881585 | orchestrator | 2025-09-08 00:25:40.881595 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-08 00:25:40.881606 | orchestrator | Monday 08 September 2025 00:25:38 +0000 (0:00:00.301) 0:01:02.242 ****** 2025-09-08 00:25:40.881616 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:40.881627 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:40.881652 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:40.881663 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:40.881673 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:40.881684 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:40.881694 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:40.881705 | orchestrator | 2025-09-08 00:25:40.881715 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-08 00:25:40.881726 | orchestrator | Monday 08 September 2025 00:25:40 +0000 
(0:00:01.632) 0:01:03.874 ****** 2025-09-08 00:25:40.881736 | orchestrator | changed: [testbed-manager] 2025-09-08 00:25:40.881747 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:25:40.881758 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:25:40.881768 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:25:40.881778 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:25:40.881794 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:25:40.881805 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:25:40.881816 | orchestrator | 2025-09-08 00:25:40.881827 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-08 00:25:40.881837 | orchestrator | Monday 08 September 2025 00:25:40 +0000 (0:00:00.591) 0:01:04.466 ****** 2025-09-08 00:25:40.881848 | orchestrator | ok: [testbed-manager] 2025-09-08 00:25:40.881859 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:25:40.881870 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:25:40.881880 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:25:40.881891 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:25:40.881901 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:25:40.881911 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:25:40.881922 | orchestrator | 2025-09-08 00:25:40.881939 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-08 00:28:00.384188 | orchestrator | Monday 08 September 2025 00:25:40 +0000 (0:00:00.239) 0:01:04.705 ****** 2025-09-08 00:28:00.384336 | orchestrator | ok: [testbed-manager] 2025-09-08 00:28:00.384353 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:28:00.384365 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:28:00.384376 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:28:00.384387 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:28:00.384398 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:28:00.384409 | orchestrator | ok: 
[testbed-node-4] 2025-09-08 00:28:00.384420 | orchestrator | 2025-09-08 00:28:00.384432 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-08 00:28:00.384444 | orchestrator | Monday 08 September 2025 00:25:42 +0000 (0:00:01.206) 0:01:05.911 ****** 2025-09-08 00:28:00.384455 | orchestrator | changed: [testbed-manager] 2025-09-08 00:28:00.384467 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:28:00.384477 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:28:00.384488 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:28:00.384499 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:28:00.384510 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:28:00.384520 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:28:00.384531 | orchestrator | 2025-09-08 00:28:00.384542 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-08 00:28:00.384587 | orchestrator | Monday 08 September 2025 00:25:43 +0000 (0:00:01.560) 0:01:07.472 ****** 2025-09-08 00:28:00.384598 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:28:00.384609 | orchestrator | ok: [testbed-manager] 2025-09-08 00:28:00.384620 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:28:00.384666 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:28:00.384678 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:28:00.384690 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:28:00.384702 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:28:00.384715 | orchestrator | 2025-09-08 00:28:00.384728 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-08 00:28:00.384741 | orchestrator | Monday 08 September 2025 00:25:45 +0000 (0:00:02.273) 0:01:09.745 ****** 2025-09-08 00:28:00.384753 | orchestrator | ok: [testbed-manager] 2025-09-08 00:28:00.384765 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:28:00.384778 | orchestrator | 
ok: [testbed-node-5] 2025-09-08 00:28:00.384791 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:28:00.384803 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:28:00.384816 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:28:00.384828 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:28:00.384841 | orchestrator | 2025-09-08 00:28:00.384853 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-08 00:28:00.384867 | orchestrator | Monday 08 September 2025 00:26:23 +0000 (0:00:37.445) 0:01:47.191 ****** 2025-09-08 00:28:00.384879 | orchestrator | changed: [testbed-manager] 2025-09-08 00:28:00.384892 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:28:00.384905 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:28:00.384918 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:28:00.384931 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:28:00.384944 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:28:00.384956 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:28:00.384969 | orchestrator | 2025-09-08 00:28:00.384981 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-08 00:28:00.384994 | orchestrator | Monday 08 September 2025 00:27:45 +0000 (0:01:22.011) 0:03:09.202 ****** 2025-09-08 00:28:00.385006 | orchestrator | ok: [testbed-manager] 2025-09-08 00:28:00.385019 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:28:00.385032 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:28:00.385044 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:28:00.385055 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:28:00.385066 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:28:00.385077 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:28:00.385088 | orchestrator | 2025-09-08 00:28:00.385099 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-08 00:28:00.385111 
| orchestrator | Monday 08 September 2025 00:27:47 +0000 (0:00:01.727) 0:03:10.929 ****** 2025-09-08 00:28:00.385121 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:28:00.385132 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:28:00.385142 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:28:00.385153 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:28:00.385164 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:28:00.385174 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:28:00.385185 | orchestrator | changed: [testbed-manager] 2025-09-08 00:28:00.385196 | orchestrator | 2025-09-08 00:28:00.385206 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-08 00:28:00.385217 | orchestrator | Monday 08 September 2025 00:27:59 +0000 (0:00:12.031) 0:03:22.961 ****** 2025-09-08 00:28:00.385247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-08 00:28:00.385281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-09-08 00:28:00.385331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-08 00:28:00.385347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-08 00:28:00.385358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-08 00:28:00.385370 | orchestrator | 2025-09-08 00:28:00.385381 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-08 00:28:00.385392 | orchestrator | Monday 08 September 2025 00:27:59 +0000 (0:00:00.430) 0:03:23.391 ****** 2025-09-08 00:28:00.385403 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-08 00:28:00.385414 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:28:00.385425 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-08 00:28:00.385436 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-08 00:28:00.385447 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:28:00.385458 | orchestrator | skipping: [testbed-node-4] 2025-09-08 
00:28:00.385469 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-08 00:28:00.385480 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:28:00.385491 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:28:00.385503 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:28:00.385514 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:28:00.385524 | orchestrator | 2025-09-08 00:28:00.385535 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-08 00:28:00.385546 | orchestrator | Monday 08 September 2025 00:28:00 +0000 (0:00:00.619) 0:03:24.011 ****** 2025-09-08 00:28:00.385557 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-08 00:28:00.385570 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-08 00:28:00.385581 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-08 00:28:00.385592 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-08 00:28:00.385603 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-08 00:28:00.385614 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-08 00:28:00.385654 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-08 00:28:00.385665 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-08 00:28:00.385676 | orchestrator | skipping: [testbed-manager] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-08 00:28:00.385687 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-08 00:28:00.385698 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:28:00.385709 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-08 00:28:00.385720 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-08 00:28:00.385731 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-08 00:28:00.385742 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-08 00:28:00.385753 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-08 00:28:00.385764 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-08 00:28:00.385775 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-08 00:28:00.385786 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-08 00:28:00.385797 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-08 00:28:00.385808 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-08 00:28:00.385825 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-08 00:28:08.251365 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-08 00:28:08.251497 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  
2025-09-08 00:28:08.251513 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-08 00:28:08.251527 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-08 00:28:08.251539 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:28:08.251551 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-08 00:28:08.251562 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-08 00:28:08.251574 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-08 00:28:08.251585 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-08 00:28:08.251596 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-08 00:28:08.251608 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:28:08.251619 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-08 00:28:08.251684 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-08 00:28:08.251697 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-08 00:28:08.251708 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-08 00:28:08.251719 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-08 00:28:08.251729 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-08 00:28:08.251767 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-08 
00:28:08.251778 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-08 00:28:08.251789 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-08 00:28:08.251800 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-08 00:28:08.251811 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:28:08.251822 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-08 00:28:08.251833 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-08 00:28:08.251843 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-08 00:28:08.251854 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-08 00:28:08.251865 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-08 00:28:08.251879 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-08 00:28:08.251892 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-08 00:28:08.251904 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-08 00:28:08.251917 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-08 00:28:08.251930 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-08 00:28:08.251942 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-08 00:28:08.251954 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.core.wmem_max', 'value': 16777216}) 2025-09-08 00:28:08.251967 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-08 00:28:08.251979 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-08 00:28:08.251991 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-08 00:28:08.252030 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-08 00:28:08.252043 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-08 00:28:08.252056 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-08 00:28:08.252069 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-08 00:28:08.252083 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-08 00:28:08.252096 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-08 00:28:08.252130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-08 00:28:08.252144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-08 00:28:08.252157 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-08 00:28:08.252169 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-08 00:28:08.252182 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-08 00:28:08.252195 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-08 00:28:08.252208 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-08 00:28:08.252228 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-08 00:28:08.252239 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-08 00:28:08.252250 | orchestrator | 2025-09-08 00:28:08.252262 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-08 00:28:08.252273 | orchestrator | Monday 08 September 2025 00:28:05 +0000 (0:00:04.856) 0:03:28.867 ****** 2025-09-08 00:28:08.252283 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:08.252294 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:08.252305 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:08.252315 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:08.252326 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:08.252337 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:08.252352 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-08 00:28:08.252364 | orchestrator | 2025-09-08 00:28:08.252375 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-08 00:28:08.252386 | orchestrator | Monday 08 September 2025 00:28:06 +0000 (0:00:01.541) 0:03:30.409 ****** 2025-09-08 00:28:08.252396 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-08 00:28:08.252407 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-08 00:28:08.252418 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:28:08.252429 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-08 00:28:08.252440 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:28:08.252451 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-08 00:28:08.252461 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:28:08.252472 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:28:08.252483 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-08 00:28:08.252494 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-08 00:28:08.252505 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-08 00:28:08.252515 | orchestrator |
2025-09-08 00:28:08.252526 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-08 00:28:08.252537 | orchestrator | Monday 08 September 2025 00:28:07 +0000 (0:00:00.678) 0:03:31.088 ******
2025-09-08 00:28:08.252547 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-08 00:28:08.252558 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-08 00:28:08.252569 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:28:08.252579 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-08 00:28:08.252590 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:28:08.252601 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:28:08.252612 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-08 00:28:08.252622 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:28:08.252653 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-08 00:28:08.252670 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-08 00:28:08.252688 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-08 00:28:08.252699 | orchestrator |
2025-09-08 00:28:08.252710 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-08 00:28:08.252721 | orchestrator | Monday 08 September 2025 00:28:07 +0000 (0:00:00.677) 0:03:31.766 ******
2025-09-08 00:28:08.252731 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:28:08.252742 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:28:08.252753 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:28:08.252764 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:28:08.252775 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:28:08.252792 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:28:20.100248 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:28:20.100374 | orchestrator |
2025-09-08 00:28:20.100390 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-08 00:28:20.100403 | orchestrator | Monday 08 September 2025 00:28:08 +0000 (0:00:00.312) 0:03:32.078 ******
2025-09-08 00:28:20.100414 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:20.100426 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:20.100437 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:20.100447 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:20.100458 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:20.100468 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:20.100479 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:20.100490 | orchestrator |
2025-09-08 00:28:20.100501 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-08 00:28:20.100512 | orchestrator | Monday 08 September 2025 00:28:14 +0000 (0:00:05.842) 0:03:37.921 ******
2025-09-08 00:28:20.100523 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-08 00:28:20.100534 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-08 00:28:20.100544 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:28:20.100555 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-08 00:28:20.100565 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:28:20.100576 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-08 00:28:20.100586 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:28:20.100597 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-08 00:28:20.100607 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:28:20.100618 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:28:20.100667 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-08 00:28:20.100679 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:28:20.100690 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-08 00:28:20.100701 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:28:20.100712 | orchestrator |
2025-09-08 00:28:20.100723 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-08 00:28:20.100733 | orchestrator | Monday 08 September 2025 00:28:14 +0000 (0:00:00.312) 0:03:38.233 ******
2025-09-08 00:28:20.100744 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-08 00:28:20.100755 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-08 00:28:20.100768 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-08 00:28:20.100780 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-08 00:28:20.100791 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-08 00:28:20.100803 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-08 00:28:20.100815 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-08 00:28:20.100827 | orchestrator |
2025-09-08 00:28:20.100840 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-08 00:28:20.100852 | orchestrator | Monday 08 September 2025 00:28:15 +0000 (0:00:01.026) 0:03:39.260 ******
2025-09-08 00:28:20.100866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:28:20.100903 | orchestrator |
2025-09-08 00:28:20.100916 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-08 00:28:20.100928 | orchestrator | Monday 08 September 2025 00:28:15 +0000 (0:00:00.520) 0:03:39.780 ******
2025-09-08 00:28:20.100940 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:20.100952 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:20.100964 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:20.100976 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:20.100989 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:20.101000 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:20.101013 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:20.101025 | orchestrator |
2025-09-08 00:28:20.101037 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-08 00:28:20.101050 | orchestrator | Monday 08 September 2025 00:28:17 +0000 (0:00:01.358) 0:03:41.139 ******
2025-09-08 00:28:20.101063 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:20.101075 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:20.101087 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:20.101099 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:20.101111 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:20.101123 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:20.101133 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:20.101144 | orchestrator |
2025-09-08 00:28:20.101155 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-09-08 00:28:20.101165 | orchestrator | Monday 08 September 2025 00:28:17 +0000 (0:00:00.584) 0:03:41.723 ******
2025-09-08 00:28:20.101176 | orchestrator | changed: [testbed-manager]
2025-09-08 00:28:20.101187 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:20.101198 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:20.101208 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:20.101219 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:20.101229 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:20.101240 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:20.101250 | orchestrator |
2025-09-08 00:28:20.101261 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-09-08 00:28:20.101287 | orchestrator | Monday 08 September 2025 00:28:18 +0000 (0:00:00.610) 0:03:42.334 ******
2025-09-08 00:28:20.101298 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:20.101309 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:20.101319 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:20.101331 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:20.101341 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:20.101352 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:20.101363 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:20.101373 | orchestrator |
2025-09-08 00:28:20.101384 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-08 00:28:20.101395 | orchestrator | Monday 08 September 2025 00:28:19 +0000 (0:00:00.613) 0:03:42.948 ******
2025-09-08 00:28:20.101431 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289886.159213, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:20.101447 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289894.9751334, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:20.101468 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289912.142801, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:20.101480 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289916.4273553, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:20.101491 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289902.8891635, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:20.101502 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289898.629889, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:20.101514 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1757289904.538198, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:20.101543 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:45.253101 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:45.253247 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:45.253263 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:45.253291 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:45.253303 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:45.253319 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-08 00:28:45.253332 | orchestrator |
2025-09-08 00:28:45.253345 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-09-08 00:28:45.253358 | orchestrator | Monday 08 September 2025 00:28:20 +0000 (0:00:00.967) 0:03:43.915 ******
2025-09-08 00:28:45.253369 | orchestrator | changed: [testbed-manager]
2025-09-08 00:28:45.253381 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:45.253391 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:45.253402 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:45.253412 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:45.253423 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:45.253433 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:45.253444 | orchestrator |
2025-09-08 00:28:45.253456 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-09-08 00:28:45.253466 | orchestrator | Monday 08 September 2025 00:28:21 +0000 (0:00:01.134) 0:03:45.050 ******
2025-09-08 00:28:45.253484 | orchestrator | changed: [testbed-manager]
2025-09-08 00:28:45.253495 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:45.253506 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:45.253516 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:45.253545 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:45.253557 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:45.253567 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:45.253578 | orchestrator |
2025-09-08 00:28:45.253588 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-09-08 00:28:45.253599 | orchestrator | Monday 08 September 2025 00:28:22 +0000 (0:00:01.161) 0:03:46.211 ******
2025-09-08 00:28:45.253610 | orchestrator | changed: [testbed-manager]
2025-09-08 00:28:45.253620 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:45.253673 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:45.253686 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:45.253698 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:45.253710 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:45.253722 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:45.253734 | orchestrator |
2025-09-08 00:28:45.253747 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-09-08 00:28:45.253759 | orchestrator | Monday 08 September 2025 00:28:23 +0000 (0:00:01.164) 0:03:47.375 ******
2025-09-08 00:28:45.253771 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:28:45.253783 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:28:45.253796 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:28:45.253808 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:28:45.253820 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:28:45.253831 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:28:45.253843 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:28:45.253856 | orchestrator |
2025-09-08 00:28:45.253868 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-09-08 00:28:45.253880 | orchestrator | Monday 08 September 2025 00:28:23 +0000 (0:00:00.298) 0:03:47.674 ******
2025-09-08 00:28:45.253892 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:45.253905 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:45.253917 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:45.253930 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:45.253942 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:45.253955 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:45.253966 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:45.253976 | orchestrator |
2025-09-08 00:28:45.253987 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-09-08 00:28:45.253998 | orchestrator | Monday 08 September 2025 00:28:24 +0000 (0:00:00.783) 0:03:48.457 ******
2025-09-08 00:28:45.254010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:28:45.254078 | orchestrator |
2025-09-08 00:28:45.254090 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-09-08 00:28:45.254100 | orchestrator | Monday 08 September 2025 00:28:25 +0000 (0:00:00.407) 0:03:48.864 ******
2025-09-08 00:28:45.254111 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:45.254122 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:45.254132 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:45.254143 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:45.254154 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:45.254164 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:45.254175 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:45.254186 | orchestrator |
2025-09-08 00:28:45.254196 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-09-08 00:28:45.254207 | orchestrator | Monday 08 September 2025 00:28:33 +0000 (0:00:08.215) 0:03:57.079 ******
2025-09-08 00:28:45.254218 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:45.254236 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:45.254247 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:45.254257 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:45.254268 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:45.254278 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:45.254288 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:45.254299 | orchestrator |
2025-09-08 00:28:45.254310 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-08 00:28:45.254320 | orchestrator | Monday 08 September 2025 00:28:34 +0000 (0:00:01.294) 0:03:58.374 ******
2025-09-08 00:28:45.254331 | orchestrator | ok: [testbed-manager]
2025-09-08 00:28:45.254341 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:28:45.254352 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:28:45.254362 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:28:45.254372 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:28:45.254382 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:28:45.254393 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:28:45.254403 | orchestrator |
2025-09-08 00:28:45.254414 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-08 00:28:45.254424 | orchestrator | Monday 08 September 2025 00:28:35 +0000 (0:00:01.022) 0:03:59.396 ******
2025-09-08 00:28:45.254441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:28:45.254452 | orchestrator |
2025-09-08 00:28:45.254463 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-08 00:28:45.254473 | orchestrator | Monday 08 September 2025 00:28:36 +0000 (0:00:00.506) 0:03:59.902 ******
2025-09-08 00:28:45.254484 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:45.254494 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:45.254505 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:28:45.254515 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:28:45.254526 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:28:45.254537 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:28:45.254547 | orchestrator | changed: [testbed-manager]
2025-09-08 00:28:45.254558 | orchestrator |
2025-09-08 00:28:45.254569 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-08 00:28:45.254579 | orchestrator | Monday 08 September 2025 00:28:44 +0000 (0:00:08.529) 0:04:08.432 ******
2025-09-08 00:28:45.254590 | orchestrator | changed: [testbed-manager]
2025-09-08 00:28:45.254600 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:28:45.254611 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:28:45.254647 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:55.157992 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:55.158211 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:55.158236 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:55.158256 | orchestrator |
2025-09-08 00:29:55.158279 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-08 00:29:55.158300 | orchestrator | Monday 08 September 2025 00:28:45 +0000 (0:00:00.643) 0:04:09.075 ******
2025-09-08 00:29:55.158319 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:55.158339 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:55.158358 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:55.158378 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:55.158398 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:55.158418 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:55.158437 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:55.158457 | orchestrator |
2025-09-08 00:29:55.158478 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-08 00:29:55.158500 | orchestrator | Monday 08 September 2025 00:28:46 +0000 (0:00:01.231) 0:04:10.307 ******
2025-09-08 00:29:55.158521 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:55.158542 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:55.158598 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:55.158619 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:55.158665 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:55.158685 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:55.158704 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:55.158723 | orchestrator |
2025-09-08 00:29:55.158743 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-08 00:29:55.158762 | orchestrator | Monday 08 September 2025 00:28:47 +0000 (0:00:01.032) 0:04:11.339 ******
2025-09-08 00:29:55.158782 | orchestrator | ok: [testbed-manager]
2025-09-08 00:29:55.158802 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:29:55.158822 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:29:55.158841 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:29:55.158859 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:29:55.158877 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:29:55.158895 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:29:55.158913 | orchestrator |
2025-09-08 00:29:55.158931 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-08 00:29:55.158951 | orchestrator | Monday 08 September 2025 00:28:47 +0000 (0:00:00.271) 0:04:11.611 ******
2025-09-08 00:29:55.158968 | orchestrator | ok: [testbed-manager]
2025-09-08 00:29:55.158987 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:29:55.159005 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:29:55.159022 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:29:55.159039 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:29:55.159057 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:29:55.159076 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:29:55.159094 | orchestrator |
2025-09-08 00:29:55.159113 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-08 00:29:55.159130 | orchestrator | Monday 08 September 2025 00:28:48 +0000 (0:00:00.312) 0:04:11.924 ******
2025-09-08 00:29:55.159148 | orchestrator | ok: [testbed-manager]
2025-09-08 00:29:55.159166 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:29:55.159184 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:29:55.159202 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:29:55.159218 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:29:55.159236 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:29:55.159254 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:29:55.159272 | orchestrator |
2025-09-08 00:29:55.159290 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-08 00:29:55.159309 | orchestrator | Monday 08 September 2025 00:28:48 +0000 (0:00:00.326) 0:04:12.251 ******
2025-09-08 00:29:55.159327 | orchestrator | ok: [testbed-manager]
2025-09-08 00:29:55.159345 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:29:55.159363 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:29:55.159381 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:29:55.159399 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:29:55.159416 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:29:55.159434 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:29:55.159451 | orchestrator |
2025-09-08 00:29:55.159468 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-08 00:29:55.159485 | orchestrator | Monday 08 September 2025 00:28:54 +0000 (0:00:05.933) 0:04:18.185 ******
2025-09-08 00:29:55.159505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:29:55.159525 | orchestrator |
2025-09-08 00:29:55.159543 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-08 00:29:55.159562 | orchestrator | Monday 08 September 2025 00:28:54 +0000 (0:00:00.400) 0:04:18.586 ******
2025-09-08 00:29:55.159580 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-08 00:29:55.159598 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-08 00:29:55.159617 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-08 00:29:55.159726 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-08 00:29:55.159759 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:29:55.159777 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-08 00:29:55.159794 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-08 00:29:55.159811 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:29:55.159827 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-08 00:29:55.159843 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-08 00:29:55.159858 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:29:55.159880 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:29:55.159905 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-08 00:29:55.159927 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-09-08 00:29:55.159944 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-08 00:29:55.159960 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-08 00:29:55.159976 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:29:55.160019 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:29:55.160036 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-08 00:29:55.160052 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-08 00:29:55.160068 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:29:55.160084 | orchestrator |
2025-09-08 00:29:55.160100 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-08 00:29:55.160117 | orchestrator | Monday 08 September 2025 00:28:55 +0000 (0:00:00.351) 0:04:18.938 ******
2025-09-08 00:29:55.160134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:29:55.160150 | orchestrator |
2025-09-08 00:29:55.160165 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-08 00:29:55.160181 | orchestrator | Monday 08 September 2025 00:28:55 +0000 (0:00:00.406) 0:04:19.344 ******
2025-09-08 00:29:55.160197 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-08 00:29:55.160213 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:29:55.160229 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-08 00:29:55.160245 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-08 00:29:55.160261 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:29:55.160276 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:29:55.160292 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-08 00:29:55.160308 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-08 00:29:55.160324 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:29:55.160339 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-08 00:29:55.160355 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:29:55.160371 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:29:55.160387 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-08 00:29:55.160402 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:29:55.160418 | orchestrator |
2025-09-08 00:29:55.160434 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-08 00:29:55.160449 | orchestrator | Monday 08 September 2025 00:28:55 +0000 (0:00:00.351) 0:04:19.696 ******
2025-09-08 00:29:55.160465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:29:55.160482 | orchestrator |
2025-09-08 00:29:55.160498 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-08 00:29:55.160526 | orchestrator | Monday 08 September 2025 00:28:56 +0000 (0:00:00.588) 0:04:20.284 ******
2025-09-08 00:29:55.160541 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:55.160557 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:55.160573 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:55.160589 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:55.160605 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:55.160621 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:55.160700 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:55.160716 | orchestrator |
2025-09-08 00:29:55.160732 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-08 00:29:55.160749 | orchestrator | Monday 08 September 2025 00:29:32 +0000 (0:00:35.713) 0:04:55.998 ******
2025-09-08 00:29:55.160764 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:55.160779 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:55.160794 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:55.160810 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:55.160826 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:55.160842 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:55.160858 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:55.160873 | orchestrator |
2025-09-08 00:29:55.160889 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-08 00:29:55.160906 | orchestrator | Monday 08 September 2025 00:29:40 +0000 (0:00:08.028) 0:05:04.027 ******
2025-09-08 00:29:55.160922 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:29:55.160938 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:29:55.160954 | orchestrator | changed: [testbed-manager]
2025-09-08 00:29:55.160970 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:29:55.160986 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:29:55.161002 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:29:55.161018 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:29:55.161033 | orchestrator |
2025-09-08 00:29:55.161049 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-08 00:29:55.161065 | orchestrator | Monday 08 September 2025 00:29:47 +0000 (0:00:07.346) 0:05:11.374 ******
2025-09-08 00:29:55.161081 | orchestrator | ok:
[testbed-manager] 2025-09-08 00:29:55.161098 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:29:55.161114 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:29:55.161130 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:29:55.161145 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:29:55.161162 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:29:55.161177 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:29:55.161193 | orchestrator | 2025-09-08 00:29:55.161209 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-08 00:29:55.161226 | orchestrator | Monday 08 September 2025 00:29:49 +0000 (0:00:01.730) 0:05:13.104 ****** 2025-09-08 00:29:55.161242 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:29:55.161259 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:29:55.161275 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:29:55.161291 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:29:55.161307 | orchestrator | changed: [testbed-manager] 2025-09-08 00:29:55.161323 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:29:55.161339 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:29:55.161354 | orchestrator | 2025-09-08 00:29:55.161370 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-08 00:29:55.161399 | orchestrator | Monday 08 September 2025 00:29:55 +0000 (0:00:05.864) 0:05:18.969 ****** 2025-09-08 00:30:06.525663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:30:06.525757 | orchestrator | 2025-09-08 00:30:06.525771 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-08 00:30:06.525805 | orchestrator | Monday 08 September 2025 00:29:55 +0000 
(0:00:00.455) 0:05:19.424 ****** 2025-09-08 00:30:06.525815 | orchestrator | changed: [testbed-manager] 2025-09-08 00:30:06.525825 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:30:06.525835 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:30:06.525844 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:30:06.525854 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:30:06.525863 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:30:06.525873 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:30:06.525882 | orchestrator | 2025-09-08 00:30:06.525892 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-08 00:30:06.525902 | orchestrator | Monday 08 September 2025 00:29:56 +0000 (0:00:00.746) 0:05:20.171 ****** 2025-09-08 00:30:06.525911 | orchestrator | ok: [testbed-manager] 2025-09-08 00:30:06.525921 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:30:06.525930 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:30:06.525940 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:30:06.525949 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:30:06.525959 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:30:06.525968 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:30:06.525977 | orchestrator | 2025-09-08 00:30:06.525987 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-08 00:30:06.525996 | orchestrator | Monday 08 September 2025 00:29:58 +0000 (0:00:01.732) 0:05:21.903 ****** 2025-09-08 00:30:06.526006 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:30:06.526059 | orchestrator | changed: [testbed-manager] 2025-09-08 00:30:06.526070 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:30:06.526080 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:30:06.526089 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:30:06.526098 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:30:06.526108 
| orchestrator | changed: [testbed-node-4] 2025-09-08 00:30:06.526117 | orchestrator | 2025-09-08 00:30:06.526127 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-08 00:30:06.526137 | orchestrator | Monday 08 September 2025 00:29:58 +0000 (0:00:00.805) 0:05:22.709 ****** 2025-09-08 00:30:06.526146 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:30:06.526155 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:30:06.526165 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:30:06.526174 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:30:06.526183 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:30:06.526193 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:30:06.526204 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:30:06.526216 | orchestrator | 2025-09-08 00:30:06.526227 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-08 00:30:06.526238 | orchestrator | Monday 08 September 2025 00:29:59 +0000 (0:00:00.323) 0:05:23.032 ****** 2025-09-08 00:30:06.526250 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:30:06.526260 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:30:06.526272 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:30:06.526283 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:30:06.526293 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:30:06.526304 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:30:06.526315 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:30:06.526326 | orchestrator | 2025-09-08 00:30:06.526338 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-08 00:30:06.526350 | orchestrator | Monday 08 September 2025 00:29:59 +0000 (0:00:00.409) 0:05:23.441 ****** 2025-09-08 00:30:06.526361 | orchestrator | ok: [testbed-manager] 2025-09-08 00:30:06.526372 | 
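The osism.commons.timezone steps logged above (install tzdata, then set UTC, then the skipped /etc/adjtime tasks) map onto straightforward Ansible tasks. A minimal sketch of the equivalent, assuming the community.general.timezone module; task wording is illustrative and not the role's actual source:

```yaml
# Hypothetical sketch of the timezone handling seen in the log;
# the real tasks live in the osism.commons.timezone role.
- name: Install tzdata package
  ansible.builtin.apt:
    name: tzdata
    state: present

- name: Set timezone to UTC
  community.general.timezone:
    name: UTC
```

On hosts already at UTC the second task reports ok instead of changed, which is why reruns of such a play converge to no changes.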
orchestrator | ok: [testbed-node-0]
2025-09-08 00:30:06.526383 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:30:06.526395 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:30:06.526407 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:30:06.526418 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:30:06.526429 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:30:06.526448 | orchestrator |
2025-09-08 00:30:06.526474 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-09-08 00:30:06.526486 | orchestrator | Monday 08 September 2025 00:29:59 +0000 (0:00:00.303) 0:05:23.745 ******
2025-09-08 00:30:06.526497 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:06.526508 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:06.526519 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:06.526530 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:06.526542 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:06.526554 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:06.526563 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:06.526572 | orchestrator |
2025-09-08 00:30:06.526582 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-09-08 00:30:06.526597 | orchestrator | Monday 08 September 2025 00:30:00 +0000 (0:00:00.279) 0:05:24.025 ******
2025-09-08 00:30:06.526607 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:06.526617 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:30:06.526643 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:30:06.526653 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:30:06.526663 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:30:06.526672 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:30:06.526682 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:30:06.526691 | orchestrator |
2025-09-08 00:30:06.526701 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-08 00:30:06.526710 | orchestrator | Monday 08 September 2025 00:30:00 +0000 (0:00:00.315) 0:05:24.340 ******
2025-09-08 00:30:06.526720 | orchestrator | ok: [testbed-manager] =>
2025-09-08 00:30:06.526730 | orchestrator |   docker_version: 5:27.5.1
2025-09-08 00:30:06.526739 | orchestrator | ok: [testbed-node-0] =>
2025-09-08 00:30:06.526749 | orchestrator |   docker_version: 5:27.5.1
2025-09-08 00:30:06.526758 | orchestrator | ok: [testbed-node-1] =>
2025-09-08 00:30:06.526768 | orchestrator |   docker_version: 5:27.5.1
2025-09-08 00:30:06.526777 | orchestrator | ok: [testbed-node-2] =>
2025-09-08 00:30:06.526787 | orchestrator |   docker_version: 5:27.5.1
2025-09-08 00:30:06.526796 | orchestrator | ok: [testbed-node-3] =>
2025-09-08 00:30:06.526806 | orchestrator |   docker_version: 5:27.5.1
2025-09-08 00:30:06.526829 | orchestrator | ok: [testbed-node-4] =>
2025-09-08 00:30:06.526840 | orchestrator |   docker_version: 5:27.5.1
2025-09-08 00:30:06.526849 | orchestrator | ok: [testbed-node-5] =>
2025-09-08 00:30:06.526859 | orchestrator |   docker_version: 5:27.5.1
2025-09-08 00:30:06.526868 | orchestrator |
2025-09-08 00:30:06.526878 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-08 00:30:06.526888 | orchestrator | Monday 08 September 2025 00:30:00 +0000 (0:00:00.300) 0:05:24.641 ******
2025-09-08 00:30:06.526897 | orchestrator | ok: [testbed-manager] =>
2025-09-08 00:30:06.526907 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-08 00:30:06.526916 | orchestrator | ok: [testbed-node-0] =>
2025-09-08 00:30:06.526926 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-08 00:30:06.526935 | orchestrator | ok: [testbed-node-1] =>
2025-09-08 00:30:06.526945 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-08 00:30:06.526954 | orchestrator | ok: [testbed-node-2] =>
2025-09-08 00:30:06.526964 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-08 00:30:06.526973 | orchestrator | ok: [testbed-node-3] =>
2025-09-08 00:30:06.526983 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-08 00:30:06.526992 | orchestrator | ok: [testbed-node-4] =>
2025-09-08 00:30:06.527001 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-08 00:30:06.527011 | orchestrator | ok: [testbed-node-5] =>
2025-09-08 00:30:06.527020 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-08 00:30:06.527030 | orchestrator |
2025-09-08 00:30:06.527040 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-08 00:30:06.527049 | orchestrator | Monday 08 September 2025 00:30:01 +0000 (0:00:00.434) 0:05:25.075 ******
2025-09-08 00:30:06.527059 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:06.527074 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:06.527084 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:06.527093 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:06.527103 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:06.527112 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:06.527122 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:06.527131 | orchestrator |
2025-09-08 00:30:06.527141 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-08 00:30:06.527151 | orchestrator | Monday 08 September 2025 00:30:01 +0000 (0:00:00.303) 0:05:25.379 ******
2025-09-08 00:30:06.527161 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:06.527170 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:06.527191 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:06.527201 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:06.527211 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:30:06.527220 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:30:06.527230 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:30:06.527239 | orchestrator |
2025-09-08 00:30:06.527249 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-09-08 00:30:06.527259 | orchestrator | Monday 08 September 2025 00:30:01 +0000 (0:00:00.272) 0:05:25.651 ******
2025-09-08 00:30:06.527270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:30:06.527280 | orchestrator |
2025-09-08 00:30:06.527290 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-09-08 00:30:06.527300 | orchestrator | Monday 08 September 2025 00:30:02 +0000 (0:00:00.436) 0:05:26.088 ******
2025-09-08 00:30:06.527310 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:30:06.527319 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:06.527329 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:30:06.527338 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:30:06.527348 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:30:06.527357 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:30:06.527367 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:30:06.527376 | orchestrator |
2025-09-08 00:30:06.527386 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-09-08 00:30:06.527396 | orchestrator | Monday 08 September 2025 00:30:03 +0000 (0:00:02.830) 0:05:26.915 ******
2025-09-08 00:30:06.527405 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:30:06.527415 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:30:06.527424 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:30:06.527433 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:30:06.527443 | orchestrator | ok: [testbed-manager]
2025-09-08 00:30:06.527452 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:30:06.527462 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:30:06.527472 | orchestrator |
2025-09-08 00:30:06.527482 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-09-08 00:30:06.527492 | orchestrator | Monday 08 September 2025 00:30:05 +0000 (0:00:02.830) 0:05:29.746 ******
2025-09-08 00:30:06.527502 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-09-08 00:30:06.527512 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-09-08 00:30:06.527521 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-09-08 00:30:06.527535 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-09-08 00:30:06.527545 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-09-08 00:30:06.527555 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-09-08 00:30:06.527564 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:30:06.527574 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-09-08 00:30:06.527583 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-09-08 00:30:06.527593 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:30:06.527608 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-09-08 00:30:06.527617 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-09-08 00:30:06.527639 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-09-08 00:30:06.527649 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-09-08 00:30:06.527658 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:30:06.527668 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-09-08 00:30:06.527678 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-09-08 00:30:06.527687 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:30:06.527702 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-09-08 00:31:06.504576 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-09-08 00:31:06.504755 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-09-08 00:31:06.504772 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-09-08 00:31:06.504784 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:31:06.504797 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:31:06.504808 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-09-08 00:31:06.504819 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-09-08 00:31:06.504829 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-09-08 00:31:06.504840 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:31:06.504851 | orchestrator |
2025-09-08 00:31:06.504864 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-09-08 00:31:06.504876 | orchestrator | Monday 08 September 2025 00:30:06 +0000 (0:00:00.752) 0:05:30.498 ******
2025-09-08 00:31:06.504887 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:06.504897 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:06.504908 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:06.504919 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:06.504929 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:06.504940 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:06.504950 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:06.504961 | orchestrator |
2025-09-08 00:31:06.504972 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-09-08 00:31:06.504983 | orchestrator | Monday 08 September 2025 00:30:13 +0000 (0:00:06.390) 0:05:36.888 ******
2025-09-08 00:31:06.504993 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:06.505004 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:06.505014 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:06.505025 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:06.505036 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:06.505046 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:06.505058 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:06.505069 | orchestrator |
2025-09-08 00:31:06.505079 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-09-08 00:31:06.505090 | orchestrator | Monday 08 September 2025 00:30:14 +0000 (0:00:01.063) 0:05:37.952 ******
2025-09-08 00:31:06.505103 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:06.505116 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:06.505128 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:06.505141 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:06.505154 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:06.505167 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:06.505180 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:06.505193 | orchestrator |
2025-09-08 00:31:06.505206 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-09-08 00:31:06.505219 | orchestrator | Monday 08 September 2025 00:30:22 +0000 (0:00:08.153) 0:05:46.106 ******
2025-09-08 00:31:06.505232 | orchestrator | changed: [testbed-manager]
2025-09-08 00:31:06.505245 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:06.505257 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:06.505298 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:06.505311 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:06.505323 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:06.505335 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:06.505348 | orchestrator |
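The repository tasks above follow the usual Docker-on-Debian pattern (add the GPG key, add the apt repository, refresh the cache), and the pinning tasks that follow hold the docker packages at the version printed earlier (5:27.5.1). A hedged sketch of what such a pin could look like expressed as an Ansible task; the destination path and package name are assumptions for illustration, not taken from the osism.services.docker role:

```yaml
# Hypothetical equivalent of the "Pin docker package version" task.
# An apt preferences entry with priority > 1000 forces this version
# even if a newer one appears in the repository.
- name: Pin docker package version
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker  # illustrative path
    content: |
      Package: docker-ce
      Pin: version 5:27.5.1*
      Pin-Priority: 1001
    mode: "0644"
```

On testbed-manager the pin already exists, which is why the task reports ok there and changed on the freshly provisioned nodes.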
2025-09-08 00:31:06.505362 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-08 00:31:06.505374 | orchestrator | Monday 08 September 2025 00:30:25 +0000 (0:00:03.549) 0:05:49.656 ****** 2025-09-08 00:31:06.505387 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:06.505399 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:06.505411 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:06.505424 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:06.505437 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:06.505450 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:06.505461 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:06.505471 | orchestrator | 2025-09-08 00:31:06.505482 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-08 00:31:06.505493 | orchestrator | Monday 08 September 2025 00:30:27 +0000 (0:00:01.521) 0:05:51.177 ****** 2025-09-08 00:31:06.505504 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:06.505515 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:06.505525 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:06.505536 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:06.505546 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:06.505557 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:06.505567 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:06.505578 | orchestrator | 2025-09-08 00:31:06.505588 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-08 00:31:06.505599 | orchestrator | Monday 08 September 2025 00:30:28 +0000 (0:00:01.362) 0:05:52.539 ****** 2025-09-08 00:31:06.505610 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:06.505655 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:31:06.505668 | orchestrator | skipping: [testbed-node-2] 
2025-09-08 00:31:06.505679 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:31:06.505689 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:31:06.505700 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:31:06.505710 | orchestrator | changed: [testbed-manager] 2025-09-08 00:31:06.505721 | orchestrator | 2025-09-08 00:31:06.505732 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-08 00:31:06.505743 | orchestrator | Monday 08 September 2025 00:30:29 +0000 (0:00:00.598) 0:05:53.137 ****** 2025-09-08 00:31:06.505754 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:06.505764 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:06.505775 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:06.505786 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:06.505796 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:06.505807 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:06.505817 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:06.505828 | orchestrator | 2025-09-08 00:31:06.505839 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-08 00:31:06.505849 | orchestrator | Monday 08 September 2025 00:30:39 +0000 (0:00:10.085) 0:06:03.223 ****** 2025-09-08 00:31:06.505860 | orchestrator | changed: [testbed-manager] 2025-09-08 00:31:06.505871 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:06.505901 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:06.505912 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:06.505923 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:06.505934 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:06.505945 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:06.505956 | orchestrator | 2025-09-08 00:31:06.505966 | orchestrator | TASK [osism.services.docker : Install docker-cli package] 
********************** 2025-09-08 00:31:06.505977 | orchestrator | Monday 08 September 2025 00:30:40 +0000 (0:00:00.907) 0:06:04.131 ****** 2025-09-08 00:31:06.505997 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:06.506008 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:06.506077 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:06.506089 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:06.506099 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:06.506110 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:06.506121 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:06.506131 | orchestrator | 2025-09-08 00:31:06.506142 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-08 00:31:06.506153 | orchestrator | Monday 08 September 2025 00:30:49 +0000 (0:00:09.032) 0:06:13.164 ****** 2025-09-08 00:31:06.506164 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:06.506174 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:06.506185 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:06.506196 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:06.506206 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:06.506217 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:06.506228 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:06.506239 | orchestrator | 2025-09-08 00:31:06.506249 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-08 00:31:06.506260 | orchestrator | Monday 08 September 2025 00:31:00 +0000 (0:00:10.764) 0:06:23.929 ****** 2025-09-08 00:31:06.506271 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-08 00:31:06.506281 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-08 00:31:06.506292 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-08 00:31:06.506303 | orchestrator | 
ok: [testbed-node-2] => (item=python3-docker) 2025-09-08 00:31:06.506314 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-08 00:31:06.506324 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-08 00:31:06.506335 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-08 00:31:06.506346 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-08 00:31:06.506356 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-08 00:31:06.506367 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-08 00:31:06.506378 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-08 00:31:06.506388 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-08 00:31:06.506399 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-08 00:31:06.506410 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-08 00:31:06.506420 | orchestrator | 2025-09-08 00:31:06.506431 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-08 00:31:06.506442 | orchestrator | Monday 08 September 2025 00:31:01 +0000 (0:00:01.215) 0:06:25.144 ****** 2025-09-08 00:31:06.506452 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:31:06.506463 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:06.506474 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:31:06.506484 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:31:06.506495 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:31:06.506506 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:31:06.506516 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:31:06.506527 | orchestrator | 2025-09-08 00:31:06.506538 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-08 00:31:06.506549 | orchestrator | Monday 08 September 2025 00:31:01 +0000 (0:00:00.530) 
0:06:25.675 ****** 2025-09-08 00:31:06.506560 | orchestrator | ok: [testbed-manager] 2025-09-08 00:31:06.506570 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:31:06.506581 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:31:06.506592 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:31:06.506602 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:31:06.506613 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:31:06.506623 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:31:06.506650 | orchestrator | 2025-09-08 00:31:06.506662 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-08 00:31:06.506682 | orchestrator | Monday 08 September 2025 00:31:05 +0000 (0:00:03.710) 0:06:29.385 ****** 2025-09-08 00:31:06.506693 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:31:06.506703 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:31:06.506714 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:31:06.506725 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:31:06.506735 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:31:06.506746 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:31:06.506762 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:31:06.506774 | orchestrator | 2025-09-08 00:31:06.506785 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-08 00:31:06.506796 | orchestrator | Monday 08 September 2025 00:31:06 +0000 (0:00:00.534) 0:06:29.919 ****** 2025-09-08 00:31:06.506807 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-08 00:31:06.506818 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-08 00:31:06.506829 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:31:06.506840 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  
2025-09-08 00:31:06.506850 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-08 00:31:06.506861 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:31:06.506872 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-08 00:31:06.506883 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-08 00:31:06.506893 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:31:06.506904 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-08 00:31:06.506915 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-08 00:31:06.506933 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:31:25.775470 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-08 00:31:25.775597 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-08 00:31:25.775613 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:31:25.775625 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-08 00:31:25.775637 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-08 00:31:25.775701 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:31:25.775713 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-08 00:31:25.775724 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-08 00:31:25.775735 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:31:25.775747 | orchestrator |
2025-09-08 00:31:25.775760 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-08 00:31:25.775772 | orchestrator | Monday 08 September 2025 00:31:06 +0000 (0:00:00.594) 0:06:30.514 ******
2025-09-08 00:31:25.775783 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:25.775794 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:31:25.775805 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:31:25.775816 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:31:25.775827 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:31:25.775838 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:31:25.775849 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:31:25.775860 | orchestrator |
2025-09-08 00:31:25.775871 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-08 00:31:25.775882 | orchestrator | Monday 08 September 2025 00:31:07 +0000 (0:00:00.509) 0:06:31.023 ******
2025-09-08 00:31:25.775893 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:25.775903 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:31:25.775914 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:31:25.775925 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:31:25.775936 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:31:25.775946 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:31:25.775987 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:31:25.775999 | orchestrator |
2025-09-08 00:31:25.776013 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-08 00:31:25.776026 | orchestrator | Monday 08 September 2025 00:31:07 +0000 (0:00:00.515) 0:06:31.539 ******
2025-09-08 00:31:25.776038 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:25.776051 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:31:25.776064 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:31:25.776076 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:31:25.776088 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:31:25.776100 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:31:25.776112 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:31:25.776125 | orchestrator |
2025-09-08 00:31:25.776137 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-08 00:31:25.776150 | orchestrator | Monday 08 September 2025 00:31:08 +0000 (0:00:00.722) 0:06:32.262 ******
2025-09-08 00:31:25.776162 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:25.776175 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:25.776187 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:25.776199 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:25.776211 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:25.776224 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:25.776236 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:25.776250 | orchestrator |
2025-09-08 00:31:25.776262 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-08 00:31:25.776275 | orchestrator | Monday 08 September 2025 00:31:10 +0000 (0:00:01.689) 0:06:33.951 ******
2025-09-08 00:31:25.776288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:31:25.776303 | orchestrator |
2025-09-08 00:31:25.776316 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-08 00:31:25.776329 | orchestrator | Monday 08 September 2025 00:31:11 +0000 (0:00:00.904) 0:06:34.855 ******
2025-09-08 00:31:25.776342 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:25.776354 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:25.776365 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:25.776375 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:25.776386 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:25.776396 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:25.776407 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:25.776418 | orchestrator |
2025-09-08 00:31:25.776429 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-09-08 00:31:25.776439 | orchestrator | Monday 08 September 2025 00:31:11 +0000 (0:00:00.815) 0:06:35.671 ******
2025-09-08 00:31:25.776450 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:25.776461 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:25.776472 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:25.776482 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:25.776493 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:25.776503 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:25.776514 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:25.776525 | orchestrator |
2025-09-08 00:31:25.776536 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-09-08 00:31:25.776546 | orchestrator | Monday 08 September 2025 00:31:12 +0000 (0:00:01.089) 0:06:36.760 ******
2025-09-08 00:31:25.776557 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:25.776567 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:25.776578 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:25.776588 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:25.776599 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:25.776609 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:25.776620 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:25.776638 | orchestrator |
2025-09-08 00:31:25.776666 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-09-08 00:31:25.776677 | orchestrator | Monday 08 September 2025 00:31:14 +0000 (0:00:01.365) 0:06:38.125 ******
2025-09-08 00:31:25.776688 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:25.776717 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:25.776729 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:25.776740 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:25.776750 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:25.776761 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:25.776771 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:25.776782 | orchestrator |
2025-09-08 00:31:25.776793 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-09-08 00:31:25.776804 | orchestrator | Monday 08 September 2025 00:31:15 +0000 (0:00:01.412) 0:06:39.537 ******
2025-09-08 00:31:25.776814 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:25.776825 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:25.776836 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:25.776846 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:25.776857 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:25.776867 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:25.776878 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:25.776889 | orchestrator |
2025-09-08 00:31:25.776899 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-09-08 00:31:25.776910 | orchestrator | Monday 08 September 2025 00:31:16 +0000 (0:00:01.294) 0:06:40.832 ******
2025-09-08 00:31:25.776921 | orchestrator | changed: [testbed-manager]
2025-09-08 00:31:25.776931 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:25.776942 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:25.776952 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:25.776963 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:25.776973 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:25.776984 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:25.776994 | orchestrator |
2025-09-08 00:31:25.777005 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-08 00:31:25.777016 | orchestrator | Monday 08 September 2025 00:31:18 +0000 (0:00:01.579) 0:06:42.411 ******
2025-09-08 00:31:25.777045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:31:25.777057 | orchestrator |
2025-09-08 00:31:25.777068 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-08 00:31:25.777079 | orchestrator | Monday 08 September 2025 00:31:19 +0000 (0:00:00.898) 0:06:43.309 ******
2025-09-08 00:31:25.777090 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:25.777100 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:25.777111 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:25.777121 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:25.777132 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:25.777143 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:25.777153 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:25.777164 | orchestrator |
2025-09-08 00:31:25.777175 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-08 00:31:25.777185 | orchestrator | Monday 08 September 2025 00:31:20 +0000 (0:00:01.464) 0:06:44.774 ******
2025-09-08 00:31:25.777196 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:25.777207 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:25.777218 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:25.777228 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:25.777239 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:25.777250 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:25.777260 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:25.777271 | orchestrator |
2025-09-08 00:31:25.777282 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-08 00:31:25.777300 | orchestrator | Monday 08 September 2025 00:31:22 +0000 (0:00:01.138) 0:06:45.913 ******
2025-09-08 00:31:25.777312 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:25.777322 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:25.777333 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:25.777343 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:25.777354 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:25.777365 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:25.777375 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:25.777386 | orchestrator |
2025-09-08 00:31:25.777397 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-08 00:31:25.777408 | orchestrator | Monday 08 September 2025 00:31:23 +0000 (0:00:01.357) 0:06:47.271 ******
2025-09-08 00:31:25.777418 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:25.777429 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:25.777440 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:25.777450 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:25.777461 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:25.777471 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:25.777482 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:25.777492 | orchestrator |
2025-09-08 00:31:25.777503 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-08 00:31:25.777514 | orchestrator | Monday 08 September 2025 00:31:24 +0000 (0:00:01.110) 0:06:48.381 ******
2025-09-08 00:31:25.777530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:31:25.777541 | orchestrator |
2025-09-08 00:31:25.777552 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:25.777563 | orchestrator | Monday 08 September 2025 00:31:25 +0000 (0:00:00.885) 0:06:49.267 ******
2025-09-08 00:31:25.777573 | orchestrator |
2025-09-08 00:31:25.777584 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:25.777595 | orchestrator | Monday 08 September 2025 00:31:25 +0000 (0:00:00.040) 0:06:49.307 ******
2025-09-08 00:31:25.777605 | orchestrator |
2025-09-08 00:31:25.777616 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:25.777627 | orchestrator | Monday 08 September 2025 00:31:25 +0000 (0:00:00.055) 0:06:49.362 ******
2025-09-08 00:31:25.777638 | orchestrator |
2025-09-08 00:31:25.777664 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:25.777675 | orchestrator | Monday 08 September 2025 00:31:25 +0000 (0:00:00.040) 0:06:49.403 ******
2025-09-08 00:31:25.777686 | orchestrator |
2025-09-08 00:31:25.777696 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:25.777714 | orchestrator | Monday 08 September 2025 00:31:25 +0000 (0:00:00.045) 0:06:49.448 ******
2025-09-08 00:31:52.204114 | orchestrator |
2025-09-08 00:31:52.204240 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:52.204257 | orchestrator | Monday 08 September 2025 00:31:25 +0000 (0:00:00.051) 0:06:49.499 ******
2025-09-08 00:31:52.204269 | orchestrator |
2025-09-08 00:31:52.204280 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-08 00:31:52.204291 | orchestrator | Monday 08 September 2025 00:31:25 +0000 (0:00:00.045) 0:06:49.544 ******
2025-09-08 00:31:52.204302 | orchestrator |
2025-09-08 00:31:52.204313 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-08 00:31:52.204324 | orchestrator | Monday 08 September 2025 00:31:25 +0000 (0:00:00.040) 0:06:49.585 ******
2025-09-08 00:31:52.204335 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:52.204347 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:52.204358 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:52.204368 | orchestrator |
2025-09-08 00:31:52.204379 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-08 00:31:52.204426 | orchestrator | Monday 08 September 2025 00:31:27 +0000 (0:00:01.373) 0:06:50.959 ******
2025-09-08 00:31:52.204438 | orchestrator | changed: [testbed-manager]
2025-09-08 00:31:52.204449 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:52.204460 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:52.204470 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:52.204481 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:52.204491 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:52.204502 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:52.204512 | orchestrator |
2025-09-08 00:31:52.204523 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-08 00:31:52.204534 | orchestrator | Monday 08 September 2025 00:31:28 +0000 (0:00:01.479) 0:06:52.438 ******
2025-09-08 00:31:52.204545 | orchestrator | changed: [testbed-manager]
2025-09-08 00:31:52.204555 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:52.204566 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:52.204576 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:52.204587 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:52.204597 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:52.204647 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:52.204660 | orchestrator |
2025-09-08 00:31:52.204673 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-08 00:31:52.204686 | orchestrator | Monday 08 September 2025 00:31:29 +0000 (0:00:01.168) 0:06:53.606 ******
2025-09-08 00:31:52.204698 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:52.204711 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:52.204723 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:52.204735 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:52.204748 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:52.204762 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:52.204774 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:52.204786 | orchestrator |
2025-09-08 00:31:52.204799 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-09-08 00:31:52.204812 | orchestrator | Monday 08 September 2025 00:31:32 +0000 (0:00:02.634) 0:06:56.241 ******
2025-09-08 00:31:52.204825 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:31:52.204835 | orchestrator |
2025-09-08 00:31:52.204846 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-09-08 00:31:52.204857 | orchestrator | Monday 08 September 2025 00:31:32 +0000 (0:00:00.105) 0:06:56.346 ******
2025-09-08 00:31:52.204867 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:52.204878 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:52.204889 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:52.204899 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:52.204910 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:52.204920 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:52.204931 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:52.204941 | orchestrator |
2025-09-08 00:31:52.204952 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-09-08 00:31:52.204964 | orchestrator | Monday 08 September 2025 00:31:33 +0000 (0:00:00.973) 0:06:57.320 ******
2025-09-08 00:31:52.204974 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:52.204985 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:31:52.204995 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:31:52.205006 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:31:52.205016 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:31:52.205027 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:31:52.205037 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:31:52.205048 | orchestrator |
2025-09-08 00:31:52.205058 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-09-08 00:31:52.205069 | orchestrator | Monday 08 September 2025 00:31:34 +0000 (0:00:00.729) 0:06:58.049 ******
2025-09-08 00:31:52.205097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:31:52.205123 | orchestrator |
2025-09-08 00:31:52.205134 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-09-08 00:31:52.205145 | orchestrator | Monday 08 September 2025 00:31:35 +0000 (0:00:00.917) 0:06:58.966 ******
2025-09-08 00:31:52.205156 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:52.205166 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:52.205177 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:52.205188 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:52.205198 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:52.205209 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:52.205219 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:52.205230 | orchestrator |
2025-09-08 00:31:52.205240 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-09-08 00:31:52.205251 | orchestrator | Monday 08 September 2025 00:31:35 +0000 (0:00:00.843) 0:06:59.810 ******
2025-09-08 00:31:52.205262 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-09-08 00:31:52.205272 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-09-08 00:31:52.205283 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-09-08 00:31:52.205311 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-09-08 00:31:52.205323 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-09-08 00:31:52.205333 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-09-08 00:31:52.205344 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-09-08 00:31:52.205355 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-09-08 00:31:52.205366 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-09-08 00:31:52.205377 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-09-08 00:31:52.205387 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-09-08 00:31:52.205398 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-09-08 00:31:52.205409 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-09-08 00:31:52.205419 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-09-08 00:31:52.205430 | orchestrator |
2025-09-08 00:31:52.205440 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-09-08 00:31:52.205451 | orchestrator | Monday 08 September 2025 00:31:38 +0000 (0:00:02.689) 0:07:02.499 ******
2025-09-08 00:31:52.205461 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:52.205472 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:31:52.205482 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:31:52.205493 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:31:52.205503 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:31:52.205514 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:31:52.205524 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:31:52.205535 | orchestrator |
2025-09-08 00:31:52.205545 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-09-08 00:31:52.205556 | orchestrator | Monday 08 September 2025 00:31:39 +0000 (0:00:00.504) 0:07:03.004 ******
2025-09-08 00:31:52.205568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:31:52.205581 | orchestrator |
2025-09-08 00:31:52.205591 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-09-08 00:31:52.205621 | orchestrator | Monday 08 September 2025 00:31:40 +0000 (0:00:00.836) 0:07:03.841 ******
2025-09-08 00:31:52.205633 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:52.205643 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:52.205654 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:52.205671 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:52.205682 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:52.205692 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:52.205703 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:52.205713 | orchestrator |
2025-09-08 00:31:52.205724 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-09-08 00:31:52.205735 | orchestrator | Monday 08 September 2025 00:31:41 +0000 (0:00:01.067) 0:07:04.908 ******
2025-09-08 00:31:52.205745 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:52.205756 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:52.205766 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:52.205777 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:52.205787 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:52.205798 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:52.205808 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:52.205819 | orchestrator |
2025-09-08 00:31:52.205829 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-09-08 00:31:52.205840 | orchestrator | Monday 08 September 2025 00:31:41 +0000 (0:00:00.861) 0:07:05.769 ******
2025-09-08 00:31:52.205850 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:52.205861 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:31:52.205872 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:31:52.205882 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:31:52.205893 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:31:52.205903 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:31:52.205914 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:31:52.205924 | orchestrator |
2025-09-08 00:31:52.205935 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-09-08 00:31:52.205946 | orchestrator | Monday 08 September 2025 00:31:42 +0000 (0:00:00.523) 0:07:06.293 ******
2025-09-08 00:31:52.205956 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:52.205967 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:31:52.205977 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:31:52.205988 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:31:52.205998 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:31:52.206009 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:31:52.206101 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:31:52.206113 | orchestrator |
2025-09-08 00:31:52.206130 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-09-08 00:31:52.206141 | orchestrator | Monday 08 September 2025 00:31:43 +0000 (0:00:01.505) 0:07:07.798 ******
2025-09-08 00:31:52.206152 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:31:52.206162 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:31:52.206173 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:31:52.206184 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:31:52.206194 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:31:52.206204 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:31:52.206215 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:31:52.206225 | orchestrator |
2025-09-08 00:31:52.206236 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-09-08 00:31:52.206246 | orchestrator | Monday 08 September 2025 00:31:44 +0000 (0:00:00.497) 0:07:08.296 ******
2025-09-08 00:31:52.206257 | orchestrator | ok: [testbed-manager]
2025-09-08 00:31:52.206267 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:31:52.206278 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:31:52.206288 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:31:52.206298 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:31:52.206309 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:31:52.206319 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:31:52.206329 | orchestrator |
2025-09-08 00:31:52.206340 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-09-08 00:31:52.206358 | orchestrator | Monday 08 September 2025 00:31:52 +0000 (0:00:07.724) 0:07:16.021 ******
2025-09-08 00:32:25.779308 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.779437 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:25.779475 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:25.779488 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:25.779500 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:25.779511 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:25.779522 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:25.779582 | orchestrator |
2025-09-08 00:32:25.779595 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-09-08 00:32:25.779607 | orchestrator | Monday 08 September 2025 00:31:53 +0000 (0:00:01.359) 0:07:17.380 ******
2025-09-08 00:32:25.779618 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.779629 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:25.779640 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:25.779651 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:25.779661 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:25.779672 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:25.779683 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:25.779694 | orchestrator |
2025-09-08 00:32:25.779705 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-09-08 00:32:25.779716 | orchestrator | Monday 08 September 2025 00:31:55 +0000 (0:00:01.776) 0:07:19.157 ******
2025-09-08 00:32:25.779726 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.779737 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:32:25.779747 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:32:25.779758 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:32:25.779769 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:32:25.779779 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:32:25.779790 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:32:25.779800 | orchestrator |
2025-09-08 00:32:25.779811 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-08 00:32:25.779822 | orchestrator | Monday 08 September 2025 00:31:57 +0000 (0:00:01.837) 0:07:20.994 ******
2025-09-08 00:32:25.779834 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.779846 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:25.779859 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:25.779872 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:25.779885 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:25.779897 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:25.779910 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:25.779923 | orchestrator |
2025-09-08 00:32:25.779936 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-08 00:32:25.779948 | orchestrator | Monday 08 September 2025 00:31:58 +0000 (0:00:00.852) 0:07:21.846 ******
2025-09-08 00:32:25.779961 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:32:25.779974 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:32:25.779986 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:32:25.779999 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:32:25.780012 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:32:25.780025 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:32:25.780037 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:32:25.780049 | orchestrator |
2025-09-08 00:32:25.780060 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-09-08 00:32:25.780070 | orchestrator | Monday 08 September 2025 00:31:58 +0000 (0:00:00.824) 0:07:22.671 ******
2025-09-08 00:32:25.780081 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:32:25.780092 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:32:25.780102 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:32:25.780113 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:32:25.780124 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:32:25.780134 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:32:25.780145 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:32:25.780155 | orchestrator |
2025-09-08 00:32:25.780166 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-09-08 00:32:25.780177 | orchestrator | Monday 08 September 2025 00:31:59 +0000 (0:00:00.516) 0:07:23.187 ******
2025-09-08 00:32:25.780195 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.780206 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:25.780217 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:25.780228 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:25.780239 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:25.780249 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:25.780260 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:25.780270 | orchestrator |
2025-09-08 00:32:25.780281 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-09-08 00:32:25.780292 | orchestrator | Monday 08 September 2025 00:32:00 +0000 (0:00:00.722) 0:07:23.910 ******
2025-09-08 00:32:25.780303 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.780314 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:25.780324 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:25.780335 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:25.780345 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:25.780356 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:25.780367 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:25.780377 | orchestrator |
2025-09-08 00:32:25.780404 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-09-08 00:32:25.780416 | orchestrator | Monday 08 September 2025 00:32:00 +0000 (0:00:00.520) 0:07:24.430 ******
2025-09-08 00:32:25.780427 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.780437 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:25.780448 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:25.780458 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:25.780469 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:25.780480 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:25.780490 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:25.780501 | orchestrator |
2025-09-08 00:32:25.780512 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-09-08 00:32:25.780523 | orchestrator | Monday 08 September 2025 00:32:01 +0000 (0:00:00.546) 0:07:24.977 ******
2025-09-08 00:32:25.780549 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.780560 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:25.780571 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:25.780582 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:25.780592 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:25.780603 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:25.780613 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:25.780624 | orchestrator |
2025-09-08 00:32:25.780635 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-09-08 00:32:25.780646 | orchestrator | Monday 08 September 2025 00:32:06 +0000 (0:00:05.723) 0:07:30.700 ******
2025-09-08 00:32:25.780656 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:32:25.780687 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:32:25.780699 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:32:25.780710 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:32:25.780720 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:32:25.780731 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:32:25.780742 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:32:25.780753 | orchestrator |
2025-09-08 00:32:25.780764 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-09-08 00:32:25.780774 | orchestrator | Monday 08 September 2025 00:32:07 +0000 (0:00:00.520) 0:07:31.220 ******
2025-09-08 00:32:25.780787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:32:25.780800 | orchestrator |
2025-09-08 00:32:25.780811 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-08 00:32:25.780822 | orchestrator | Monday 08 September 2025 00:32:08 +0000 (0:00:01.049) 0:07:32.270 ******
2025-09-08 00:32:25.780833 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.780843 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:25.780862 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:25.780873 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:25.780884 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:25.780894 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:25.780905 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:25.780916 | orchestrator |
2025-09-08 00:32:25.780927 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-08 00:32:25.780938 | orchestrator | Monday 08 September 2025 00:32:10 +0000 (0:00:01.849) 0:07:34.119 ******
2025-09-08 00:32:25.780949 | orchestrator | ok: [testbed-manager]
2025-09-08 00:32:25.780959 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:32:25.780970 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:32:25.780981 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:32:25.780992 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:32:25.781002 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:32:25.781013 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:32:25.781024 | orchestrator | 2025-09-08 00:32:25.781035 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-08 00:32:25.781046 | orchestrator | Monday 08 September 2025 00:32:11 +0000 (0:00:01.169) 0:07:35.289 ****** 2025-09-08 00:32:25.781057 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:25.781068 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:25.781078 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:25.781089 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:25.781100 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:25.781110 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:25.781121 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:25.781132 | orchestrator | 2025-09-08 00:32:25.781143 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-08 00:32:25.781154 | orchestrator | Monday 08 September 2025 00:32:12 +0000 (0:00:01.064) 0:07:36.353 ****** 2025-09-08 00:32:25.781165 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:25.781178 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:25.781189 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:25.781200 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:25.781211 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:25.781222 | orchestrator | 
changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:25.781232 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-08 00:32:25.781243 | orchestrator | 2025-09-08 00:32:25.781254 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-08 00:32:25.781266 | orchestrator | Monday 08 September 2025 00:32:14 +0000 (0:00:01.749) 0:07:38.103 ****** 2025-09-08 00:32:25.781277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:32:25.781288 | orchestrator | 2025-09-08 00:32:25.781299 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-08 00:32:25.781310 | orchestrator | Monday 08 September 2025 00:32:15 +0000 (0:00:00.790) 0:07:38.894 ****** 2025-09-08 00:32:25.781320 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:32:25.781331 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:32:25.781349 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:32:25.781359 | orchestrator | changed: [testbed-manager] 2025-09-08 00:32:25.781370 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:32:25.781381 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:32:25.781392 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:32:25.781403 | orchestrator | 2025-09-08 00:32:25.781414 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-08 00:32:25.781425 | orchestrator | Monday 08 September 2025 00:32:24 +0000 (0:00:08.997) 0:07:47.891 ****** 2025-09-08 00:32:25.781436 | orchestrator | 
ok: [testbed-manager] 2025-09-08 00:32:25.781446 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:25.781463 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:40.367911 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:40.368030 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:40.368046 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:40.368057 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:40.368069 | orchestrator | 2025-09-08 00:32:40.368081 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-08 00:32:40.368093 | orchestrator | Monday 08 September 2025 00:32:25 +0000 (0:00:01.707) 0:07:49.599 ****** 2025-09-08 00:32:40.368104 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:40.368114 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:40.368125 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:40.368136 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:40.368147 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:40.368157 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:40.368168 | orchestrator | 2025-09-08 00:32:40.368179 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-08 00:32:40.368189 | orchestrator | Monday 08 September 2025 00:32:27 +0000 (0:00:01.300) 0:07:50.899 ****** 2025-09-08 00:32:40.368200 | orchestrator | changed: [testbed-manager] 2025-09-08 00:32:40.368212 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:32:40.368223 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:32:40.368233 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:32:40.368244 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:32:40.368255 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:32:40.368265 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:32:40.368276 | orchestrator | 2025-09-08 00:32:40.368286 | orchestrator | PLAY [Apply bootstrap role part 2] 
********************************************* 2025-09-08 00:32:40.368297 | orchestrator | 2025-09-08 00:32:40.368308 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-08 00:32:40.368319 | orchestrator | Monday 08 September 2025 00:32:28 +0000 (0:00:01.453) 0:07:52.353 ****** 2025-09-08 00:32:40.368330 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:32:40.368340 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:32:40.368352 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:32:40.368363 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:32:40.368373 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:32:40.368384 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:32:40.368395 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:32:40.368408 | orchestrator | 2025-09-08 00:32:40.368422 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-08 00:32:40.368435 | orchestrator | 2025-09-08 00:32:40.368448 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-08 00:32:40.368460 | orchestrator | Monday 08 September 2025 00:32:29 +0000 (0:00:00.529) 0:07:52.883 ****** 2025-09-08 00:32:40.368473 | orchestrator | changed: [testbed-manager] 2025-09-08 00:32:40.368486 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:32:40.368498 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:32:40.368534 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:32:40.368547 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:32:40.368559 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:32:40.368571 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:32:40.368584 | orchestrator | 2025-09-08 00:32:40.368623 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-08 00:32:40.368637 | orchestrator | Monday 08 
September 2025 00:32:30 +0000 (0:00:01.310) 0:07:54.193 ****** 2025-09-08 00:32:40.368650 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:40.368662 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:40.368675 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:40.368687 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:40.368700 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:40.368712 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:40.368771 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:40.368784 | orchestrator | 2025-09-08 00:32:40.368795 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-08 00:32:40.368806 | orchestrator | Monday 08 September 2025 00:32:32 +0000 (0:00:01.664) 0:07:55.858 ****** 2025-09-08 00:32:40.368817 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:32:40.368827 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:32:40.368838 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:32:40.368849 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:32:40.368859 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:32:40.368870 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:32:40.368880 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:32:40.368891 | orchestrator | 2025-09-08 00:32:40.368901 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-08 00:32:40.368912 | orchestrator | Monday 08 September 2025 00:32:32 +0000 (0:00:00.842) 0:07:56.701 ****** 2025-09-08 00:32:40.368923 | orchestrator | changed: [testbed-manager] 2025-09-08 00:32:40.368933 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:32:40.368944 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:32:40.368954 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:32:40.368965 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:32:40.368980 | orchestrator | changed: 
[testbed-node-4] 2025-09-08 00:32:40.368991 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:32:40.369002 | orchestrator | 2025-09-08 00:32:40.369013 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-08 00:32:40.369023 | orchestrator | 2025-09-08 00:32:40.369034 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-08 00:32:40.369045 | orchestrator | Monday 08 September 2025 00:32:34 +0000 (0:00:01.223) 0:07:57.925 ****** 2025-09-08 00:32:40.369056 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:32:40.369069 | orchestrator | 2025-09-08 00:32:40.369080 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-08 00:32:40.369091 | orchestrator | Monday 08 September 2025 00:32:35 +0000 (0:00:01.027) 0:07:58.952 ****** 2025-09-08 00:32:40.369101 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:40.369112 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:40.369123 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:40.369133 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:40.369144 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:40.369155 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:40.369165 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:40.369176 | orchestrator | 2025-09-08 00:32:40.369187 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-08 00:32:40.369215 | orchestrator | Monday 08 September 2025 00:32:35 +0000 (0:00:00.832) 0:07:59.785 ****** 2025-09-08 00:32:40.369226 | orchestrator | changed: [testbed-manager] 2025-09-08 00:32:40.369237 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:32:40.369247 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:32:40.369258 | 
orchestrator | changed: [testbed-node-2] 2025-09-08 00:32:40.369268 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:32:40.369279 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:32:40.369289 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:32:40.369300 | orchestrator | 2025-09-08 00:32:40.369319 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-08 00:32:40.369330 | orchestrator | Monday 08 September 2025 00:32:37 +0000 (0:00:01.175) 0:08:00.961 ****** 2025-09-08 00:32:40.369340 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:32:40.369351 | orchestrator | 2025-09-08 00:32:40.369362 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-08 00:32:40.369372 | orchestrator | Monday 08 September 2025 00:32:38 +0000 (0:00:01.140) 0:08:02.102 ****** 2025-09-08 00:32:40.369383 | orchestrator | ok: [testbed-manager] 2025-09-08 00:32:40.369393 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:32:40.369404 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:32:40.369414 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:32:40.369425 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:32:40.369435 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:32:40.369446 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:32:40.369456 | orchestrator | 2025-09-08 00:32:40.369467 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-08 00:32:40.369478 | orchestrator | Monday 08 September 2025 00:32:39 +0000 (0:00:00.897) 0:08:02.999 ****** 2025-09-08 00:32:40.369488 | orchestrator | changed: [testbed-manager] 2025-09-08 00:32:40.369499 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:32:40.369544 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:32:40.369555 | 
orchestrator | changed: [testbed-node-2] 2025-09-08 00:32:40.369566 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:32:40.369577 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:32:40.369587 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:32:40.369598 | orchestrator | 2025-09-08 00:32:40.369609 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:32:40.369621 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-08 00:32:40.369632 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-08 00:32:40.369643 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-08 00:32:40.369654 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-08 00:32:40.369665 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-08 00:32:40.369676 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-08 00:32:40.369687 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-08 00:32:40.369697 | orchestrator | 2025-09-08 00:32:40.369708 | orchestrator | 2025-09-08 00:32:40.369719 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:32:40.369730 | orchestrator | Monday 08 September 2025 00:32:40 +0000 (0:00:01.167) 0:08:04.167 ****** 2025-09-08 00:32:40.369740 | orchestrator | =============================================================================== 2025-09-08 00:32:40.369751 | orchestrator | osism.commons.packages : Install required packages --------------------- 82.01s 2025-09-08 00:32:40.369761 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 37.45s 2025-09-08 00:32:40.369772 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.71s 2025-09-08 00:32:40.369783 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.06s 2025-09-08 00:32:40.369801 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.43s 2025-09-08 00:32:40.369812 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.03s 2025-09-08 00:32:40.369824 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.77s 2025-09-08 00:32:40.369835 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.09s 2025-09-08 00:32:40.369845 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.03s 2025-09-08 00:32:40.369856 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.00s 2025-09-08 00:32:40.369867 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.53s 2025-09-08 00:32:40.369877 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.22s 2025-09-08 00:32:40.369888 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.15s 2025-09-08 00:32:40.369898 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.03s 2025-09-08 00:32:40.369909 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.72s 2025-09-08 00:32:40.369927 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.35s 2025-09-08 00:32:40.833644 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.39s 2025-09-08 00:32:40.833749 | orchestrator | 
osism.commons.cleanup : Populate service facts -------------------------- 5.93s 2025-09-08 00:32:40.833762 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.86s 2025-09-08 00:32:40.833774 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.84s 2025-09-08 00:32:41.123186 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-08 00:32:41.123286 | orchestrator | + osism apply network 2025-09-08 00:32:53.944712 | orchestrator | 2025-09-08 00:32:53 | INFO  | Task 812d5278-4616-4d5c-b8c6-9d52c9313977 (network) was prepared for execution. 2025-09-08 00:32:53.944836 | orchestrator | 2025-09-08 00:32:53 | INFO  | It takes a moment until task 812d5278-4616-4d5c-b8c6-9d52c9313977 (network) has been started and output is visible here. 2025-09-08 00:33:22.989999 | orchestrator | 2025-09-08 00:33:22.990209 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-08 00:33:22.990253 | orchestrator | 2025-09-08 00:33:22.990274 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-08 00:33:22.990288 | orchestrator | Monday 08 September 2025 00:32:58 +0000 (0:00:00.270) 0:00:00.270 ****** 2025-09-08 00:33:22.990300 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:22.990312 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:33:22.990323 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:33:22.990333 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:33:22.990344 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:33:22.990355 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:33:22.990366 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:33:22.990377 | orchestrator | 2025-09-08 00:33:22.990388 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-08 00:33:22.990399 | orchestrator | Monday 08 September 2025 00:32:59 +0000 (0:00:00.801) 
0:00:01.072 ****** 2025-09-08 00:33:22.990411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:33:22.990451 | orchestrator | 2025-09-08 00:33:22.990463 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-08 00:33:22.990474 | orchestrator | Monday 08 September 2025 00:33:00 +0000 (0:00:01.220) 0:00:02.293 ****** 2025-09-08 00:33:22.990485 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:33:22.990495 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:22.990506 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:33:22.990517 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:33:22.990530 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:33:22.990571 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:33:22.990584 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:33:22.990597 | orchestrator | 2025-09-08 00:33:22.990610 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-08 00:33:22.990623 | orchestrator | Monday 08 September 2025 00:33:02 +0000 (0:00:02.047) 0:00:04.340 ****** 2025-09-08 00:33:22.990636 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:22.990648 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:33:22.990661 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:33:22.990675 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:33:22.990687 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:33:22.990699 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:33:22.990712 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:33:22.990725 | orchestrator | 2025-09-08 00:33:22.990738 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-08 00:33:22.990751 | orchestrator | 
Monday 08 September 2025 00:33:04 +0000 (0:00:01.834) 0:00:06.175 ****** 2025-09-08 00:33:22.990763 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-08 00:33:22.990777 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-08 00:33:22.990789 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-08 00:33:22.990801 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-08 00:33:22.990814 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-08 00:33:22.990827 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-08 00:33:22.990839 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-08 00:33:22.990852 | orchestrator | 2025-09-08 00:33:22.990865 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-08 00:33:22.990892 | orchestrator | Monday 08 September 2025 00:33:05 +0000 (0:00:01.023) 0:00:07.198 ****** 2025-09-08 00:33:22.990904 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-08 00:33:22.990915 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-08 00:33:22.990926 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-08 00:33:22.990937 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-08 00:33:22.990948 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 00:33:22.990958 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 00:33:22.990969 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-08 00:33:22.990980 | orchestrator | 2025-09-08 00:33:22.990991 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-08 00:33:22.991001 | orchestrator | Monday 08 September 2025 00:33:08 +0000 (0:00:03.328) 0:00:10.527 ****** 2025-09-08 00:33:22.991012 | orchestrator | changed: [testbed-manager] 2025-09-08 00:33:22.991023 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:33:22.991033 | orchestrator | 
changed: [testbed-node-1] 2025-09-08 00:33:22.991044 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:33:22.991054 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:33:22.991065 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:33:22.991076 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:33:22.991086 | orchestrator | 2025-09-08 00:33:22.991097 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-08 00:33:22.991108 | orchestrator | Monday 08 September 2025 00:33:10 +0000 (0:00:01.468) 0:00:11.995 ****** 2025-09-08 00:33:22.991118 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 00:33:22.991129 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 00:33:22.991139 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-08 00:33:22.991150 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-08 00:33:22.991161 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-08 00:33:22.991171 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-08 00:33:22.991182 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-08 00:33:22.991192 | orchestrator | 2025-09-08 00:33:22.991203 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-08 00:33:22.991213 | orchestrator | Monday 08 September 2025 00:33:11 +0000 (0:00:01.951) 0:00:13.947 ****** 2025-09-08 00:33:22.991233 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:22.991244 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:33:22.991255 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:33:22.991266 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:33:22.991276 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:33:22.991287 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:33:22.991297 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:33:22.991308 | orchestrator | 2025-09-08 00:33:22.991319 | orchestrator | TASK [osism.commons.network : 
Copy interfaces file] **************************** 2025-09-08 00:33:22.991350 | orchestrator | Monday 08 September 2025 00:33:13 +0000 (0:00:01.115) 0:00:15.062 ****** 2025-09-08 00:33:22.991362 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:33:22.991372 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:33:22.991383 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:33:22.991394 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:33:22.991404 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:33:22.991415 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:33:22.991442 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:33:22.991454 | orchestrator | 2025-09-08 00:33:22.991465 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-08 00:33:22.991476 | orchestrator | Monday 08 September 2025 00:33:13 +0000 (0:00:00.718) 0:00:15.781 ****** 2025-09-08 00:33:22.991486 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:22.991497 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:33:22.991508 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:33:22.991518 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:33:22.991529 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:33:22.991540 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:33:22.991550 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:33:22.991561 | orchestrator | 2025-09-08 00:33:22.991572 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-08 00:33:22.991583 | orchestrator | Monday 08 September 2025 00:33:16 +0000 (0:00:02.242) 0:00:18.023 ****** 2025-09-08 00:33:22.991593 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:33:22.991604 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:33:22.991615 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:33:22.991626 | orchestrator | skipping: [testbed-node-3] 2025-09-08 
00:33:22.991636 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:33:22.991647 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:33:22.991658 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-08 00:33:22.991670 | orchestrator | 2025-09-08 00:33:22.991681 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-08 00:33:22.991692 | orchestrator | Monday 08 September 2025 00:33:17 +0000 (0:00:00.959) 0:00:18.983 ****** 2025-09-08 00:33:22.991703 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:22.991713 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:33:22.991724 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:33:22.991735 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:33:22.991745 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:33:22.991756 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:33:22.991767 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:33:22.991778 | orchestrator | 2025-09-08 00:33:22.991789 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-08 00:33:22.991799 | orchestrator | Monday 08 September 2025 00:33:18 +0000 (0:00:01.645) 0:00:20.629 ****** 2025-09-08 00:33:22.991810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:33:22.991823 | orchestrator | 2025-09-08 00:33:22.991834 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-08 00:33:22.991852 | orchestrator | Monday 08 September 2025 00:33:19 +0000 (0:00:01.276) 0:00:21.906 ****** 2025-09-08 00:33:22.991863 | orchestrator | ok: [testbed-node-0] 2025-09-08 
00:33:22.991874 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:22.991884 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:33:22.991895 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:33:22.991911 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:33:22.991922 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:33:22.991932 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:33:22.991943 | orchestrator | 2025-09-08 00:33:22.991954 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-08 00:33:22.991965 | orchestrator | Monday 08 September 2025 00:33:20 +0000 (0:00:00.974) 0:00:22.880 ****** 2025-09-08 00:33:22.991976 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:22.991987 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:33:22.991997 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:33:22.992008 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:33:22.992018 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:33:22.992029 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:33:22.992040 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:33:22.992050 | orchestrator | 2025-09-08 00:33:22.992061 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-08 00:33:22.992072 | orchestrator | Monday 08 September 2025 00:33:21 +0000 (0:00:00.858) 0:00:23.739 ****** 2025-09-08 00:33:22.992083 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-08 00:33:22.992093 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-08 00:33:22.992104 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-08 00:33:22.992114 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-08 00:33:22.992125 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-08 00:33:22.992136 | orchestrator 
| skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-08 00:33:22.992146 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-08 00:33:22.992157 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-08 00:33:22.992168 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-08 00:33:22.992178 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-08 00:33:22.992189 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-08 00:33:22.992200 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-08 00:33:22.992225 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-08 00:33:22.992247 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-08 00:33:22.992258 | orchestrator | 2025-09-08 00:33:22.992277 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-08 00:33:40.263817 | orchestrator | Monday 08 September 2025 00:33:22 +0000 (0:00:01.197) 0:00:24.936 ****** 2025-09-08 00:33:40.263965 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:33:40.263985 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:33:40.264028 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:33:40.264041 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:33:40.264053 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:33:40.264064 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:33:40.264075 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:33:40.264086 | orchestrator | 2025-09-08 00:33:40.264098 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-08 00:33:40.264109 | orchestrator | Monday 08 September 2025 00:33:23 +0000 
(0:00:00.661) 0:00:25.598 ****** 2025-09-08 00:33:40.264122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-0, testbed-manager, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2025-09-08 00:33:40.264164 | orchestrator | 2025-09-08 00:33:40.264176 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-08 00:33:40.264187 | orchestrator | Monday 08 September 2025 00:33:28 +0000 (0:00:04.736) 0:00:30.334 ****** 2025-09-08 00:33:40.264199 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264237 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264447 | orchestrator | 2025-09-08 00:33:40.264461 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-08 00:33:40.264473 | orchestrator | Monday 08 September 2025 00:33:34 +0000 (0:00:05.814) 0:00:36.149 ****** 2025-09-08 00:33:40.264486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264500 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 
42}}) 2025-09-08 00:33:40.264525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264552 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-08 00:33:40.264630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:40.264663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:46.714833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-08 00:33:46.714952 | orchestrator | 2025-09-08 00:33:46.714969 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-08 00:33:46.714983 | orchestrator | Monday 08 September 2025 00:33:40 +0000 (0:00:06.061) 0:00:42.210 ****** 2025-09-08 00:33:46.714996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:33:46.715008 | orchestrator | 2025-09-08 00:33:46.715019 | orchestrator | TASK 
[osism.commons.network : List existing configuration files] *************** 2025-09-08 00:33:46.715030 | orchestrator | Monday 08 September 2025 00:33:41 +0000 (0:00:01.268) 0:00:43.478 ****** 2025-09-08 00:33:46.715041 | orchestrator | ok: [testbed-manager] 2025-09-08 00:33:46.715054 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:33:46.715085 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:33:46.715097 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:33:46.715107 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:33:46.715118 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:33:46.715128 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:33:46.715139 | orchestrator | 2025-09-08 00:33:46.715150 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-08 00:33:46.715161 | orchestrator | Monday 08 September 2025 00:33:42 +0000 (0:00:01.238) 0:00:44.717 ****** 2025-09-08 00:33:46.715172 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:46.715183 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:46.715194 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:46.715205 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:46.715215 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:46.715226 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:46.715237 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:46.715247 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:46.715258 | orchestrator | skipping: [testbed-manager] 2025-09-08 
00:33:46.715270 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:46.715280 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:46.715296 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:46.715307 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:46.715317 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:33:46.715328 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:46.715339 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:46.715349 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:46.715417 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:46.715432 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:33:46.715445 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:46.715458 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:46.715471 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:46.715484 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:46.715496 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:33:46.715509 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:46.715521 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:46.715534 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:46.715547 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:46.715559 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:33:46.715571 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:33:46.715583 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-08 00:33:46.715595 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-08 00:33:46.715608 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-08 00:33:46.715620 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-08 00:33:46.715633 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:33:46.715645 | orchestrator | 2025-09-08 00:33:46.715658 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-08 00:33:46.715690 | orchestrator | Monday 08 September 2025 00:33:44 +0000 (0:00:02.080) 0:00:46.798 ****** 2025-09-08 00:33:46.715703 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:33:46.715717 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:33:46.715727 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:33:46.715737 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:33:46.715748 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:33:46.715759 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:33:46.715769 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:33:46.715780 | orchestrator | 2025-09-08 00:33:46.715791 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-08 00:33:46.715801 | orchestrator | Monday 08 September 2025 00:33:45 +0000 (0:00:00.677) 0:00:47.476 ****** 2025-09-08 00:33:46.715812 | orchestrator | skipping: 
[testbed-manager]
2025-09-08 00:33:46.715822 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:33:46.715833 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:33:46.715843 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:33:46.715854 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:33:46.715865 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:33:46.715875 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:33:46.715886 | orchestrator |
2025-09-08 00:33:46.715897 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:33:46.715909 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-08 00:33:46.715921 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:33:46.715932 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:33:46.715943 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:33:46.715962 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:33:46.715973 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:33:46.715984 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 00:33:46.715995 | orchestrator |
2025-09-08 00:33:46.716005 | orchestrator |
2025-09-08 00:33:46.716016 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:33:46.716027 | orchestrator | Monday 08 September 2025 00:33:46 +0000 (0:00:00.770) 0:00:48.246 ******
2025-09-08 00:33:46.716043 | orchestrator | ===============================================================================
2025-09-08 00:33:46.716054 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.06s
2025-09-08 00:33:46.716064 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.81s
2025-09-08 00:33:46.716075 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.74s
2025-09-08 00:33:46.716086 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.33s
2025-09-08 00:33:46.716096 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.24s
2025-09-08 00:33:46.716107 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.08s
2025-09-08 00:33:46.716118 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.05s
2025-09-08 00:33:46.716128 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.95s
2025-09-08 00:33:46.716139 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.83s
2025-09-08 00:33:46.716149 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.65s
2025-09-08 00:33:46.716160 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.47s
2025-09-08 00:33:46.716170 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.28s
2025-09-08 00:33:46.716181 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.27s
2025-09-08 00:33:46.716192 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.24s
2025-09-08 00:33:46.716202 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s
2025-09-08 00:33:46.716213 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s
2025-09-08
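The "Create systemd networkd netdev files" items logged above show a full VXLAN mesh: every host's `dests` list is simply all tunnel endpoints except its own `local_ip`. A minimal sketch of that relationship (hypothetical helper, not the role's actual code):

```shell
# All VXLAN tunnel endpoints seen in the log (manager plus nodes 0-5).
endpoints="192.168.16.5 192.168.16.10 192.168.16.11 192.168.16.12 192.168.16.13 192.168.16.14 192.168.16.15"

# Hypothetical helper, not the role's actual code: a host's remote VTEPs
# ("dests") are every endpoint except its own local_ip.
local_ip="192.168.16.10"   # testbed-node-0, as logged for vxlan0/vxlan1
peers=$(for ip in $endpoints; do
    [ "$ip" != "$local_ip" ] && echo "$ip"
done | sort | xargs)
```

Note that the `dests` lists in the log are string-sorted, which is why `192.168.16.5` appears after `192.168.16.15`.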
00:33:46.716224 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s 2025-09-08 00:33:46.716234 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s 2025-09-08 00:33:46.716245 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-09-08 00:33:46.716256 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.96s 2025-09-08 00:33:47.016798 | orchestrator | + osism apply wireguard 2025-09-08 00:33:58.980616 | orchestrator | 2025-09-08 00:33:58 | INFO  | Task 1b4e623d-fa34-4220-beb8-95b2b9675752 (wireguard) was prepared for execution. 2025-09-08 00:33:58.980737 | orchestrator | 2025-09-08 00:33:58 | INFO  | It takes a moment until task 1b4e623d-fa34-4220-beb8-95b2b9675752 (wireguard) has been started and output is visible here. 2025-09-08 00:34:19.210222 | orchestrator | 2025-09-08 00:34:19.210397 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-08 00:34:19.210418 | orchestrator | 2025-09-08 00:34:19.210431 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-08 00:34:19.210442 | orchestrator | Monday 08 September 2025 00:34:03 +0000 (0:00:00.228) 0:00:00.228 ****** 2025-09-08 00:34:19.210454 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:19.210493 | orchestrator | 2025-09-08 00:34:19.210505 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-08 00:34:19.210515 | orchestrator | Monday 08 September 2025 00:34:04 +0000 (0:00:01.579) 0:00:01.807 ****** 2025-09-08 00:34:19.210526 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:19.210537 | orchestrator | 2025-09-08 00:34:19.210548 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-08 00:34:19.210559 | orchestrator | 
Monday 08 September 2025 00:34:11 +0000 (0:00:06.733) 0:00:08.540 ****** 2025-09-08 00:34:19.210571 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:19.210582 | orchestrator | 2025-09-08 00:34:19.210592 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-08 00:34:19.210603 | orchestrator | Monday 08 September 2025 00:34:11 +0000 (0:00:00.572) 0:00:09.113 ****** 2025-09-08 00:34:19.210614 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:19.210625 | orchestrator | 2025-09-08 00:34:19.210635 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-08 00:34:19.210646 | orchestrator | Monday 08 September 2025 00:34:12 +0000 (0:00:00.466) 0:00:09.579 ****** 2025-09-08 00:34:19.210657 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:19.210667 | orchestrator | 2025-09-08 00:34:19.210678 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-08 00:34:19.210689 | orchestrator | Monday 08 September 2025 00:34:12 +0000 (0:00:00.529) 0:00:10.109 ****** 2025-09-08 00:34:19.210699 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:19.210710 | orchestrator | 2025-09-08 00:34:19.210720 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-08 00:34:19.210731 | orchestrator | Monday 08 September 2025 00:34:13 +0000 (0:00:00.547) 0:00:10.657 ****** 2025-09-08 00:34:19.210742 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:19.210754 | orchestrator | 2025-09-08 00:34:19.210767 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-08 00:34:19.210779 | orchestrator | Monday 08 September 2025 00:34:13 +0000 (0:00:00.440) 0:00:11.097 ****** 2025-09-08 00:34:19.210791 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:19.210804 | orchestrator | 2025-09-08 00:34:19.210816 | orchestrator 
| TASK [osism.services.wireguard : Copy client configuration files] **************
2025-09-08 00:34:19.210828 | orchestrator | Monday 08 September 2025 00:34:15 +0000 (0:00:00.918) 0:00:12.324 ******
2025-09-08 00:34:19.210841 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-08 00:34:19.210854 | orchestrator | changed: [testbed-manager]
2025-09-08 00:34:19.210866 | orchestrator |
2025-09-08 00:34:19.210878 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-09-08 00:34:19.210890 | orchestrator | Monday 08 September 2025 00:34:16 +0000 (0:00:01.766) 0:00:13.242 ******
2025-09-08 00:34:19.210902 | orchestrator | changed: [testbed-manager]
2025-09-08 00:34:19.210914 | orchestrator |
2025-09-08 00:34:19.210940 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-09-08 00:34:19.210954 | orchestrator | Monday 08 September 2025 00:34:17 +0000 (0:00:01.766) 0:00:15.009 ******
2025-09-08 00:34:19.210966 | orchestrator | changed: [testbed-manager]
2025-09-08 00:34:19.210979 | orchestrator |
2025-09-08 00:34:19.210992 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:34:19.211005 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:34:19.211018 | orchestrator |
2025-09-08 00:34:19.211031 | orchestrator |
2025-09-08 00:34:19.211043 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:34:19.211056 | orchestrator | Monday 08 September 2025 00:34:18 +0000 (0:00:00.993) 0:00:16.003 ******
2025-09-08 00:34:19.211069 | orchestrator | ===============================================================================
2025-09-08 00:34:19.211081 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.73s
2025-09-08 00:34:19.211094 | orchestrator |
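The wireguard play above generates server and client keys plus a preshared key, renders `/etc/wireguard/wg0.conf` and the client configuration files, and enables `wg-quick@wg0.service`. The template itself is not shown in the log; as a rough, hypothetical illustration of the file shape `wg-quick` expects (the addresses, port, and key placeholders are all assumptions, not values from this job):

```ini
# Hypothetical shape of a rendered wg0.conf -- all values are placeholders.
[Interface]
Address    = 192.168.48.1/24
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
PublicKey    = <client public key>
PresharedKey = <preshared key>
AllowedIPs   = 192.168.48.2/32
```

The "Restart wg0 service" handler then brings the interface up via `wg-quick@wg0.service`.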
osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.77s
2025-09-08 00:34:19.211118 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.58s
2025-09-08 00:34:19.211128 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s
2025-09-08 00:34:19.211139 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s
2025-09-08 00:34:19.211150 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s
2025-09-08 00:34:19.211160 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2025-09-08 00:34:19.211171 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.55s
2025-09-08 00:34:19.211181 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s
2025-09-08 00:34:19.211192 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s
2025-09-08 00:34:19.211203 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2025-09-08 00:34:19.538583 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-09-08 00:34:19.577072 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-09-08 00:34:19.577126 | orchestrator | Dload Upload Total Spent Left Speed
2025-09-08 00:34:19.660939 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 177 0 --:--:-- --:--:-- --:--:-- 180
2025-09-08 00:34:19.673992 | orchestrator | + osism apply --environment custom workarounds
2025-09-08 00:34:21.550499 | orchestrator | 2025-09-08 00:34:21 | INFO  | Trying to run play workarounds in environment custom
2025-09-08 00:34:31.644825 | orchestrator | 2025-09-08 00:34:31 | INFO  | Task 954a85c5-ee7e-4fb2-93ac-81f1e1f54671 (workarounds) was
prepared for execution. 2025-09-08 00:34:31.644963 | orchestrator | 2025-09-08 00:34:31 | INFO  | It takes a moment until task 954a85c5-ee7e-4fb2-93ac-81f1e1f54671 (workarounds) has been started and output is visible here. 2025-09-08 00:34:57.149635 | orchestrator | 2025-09-08 00:34:57.149757 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:34:57.149775 | orchestrator | 2025-09-08 00:34:57.149788 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-08 00:34:57.149799 | orchestrator | Monday 08 September 2025 00:34:35 +0000 (0:00:00.157) 0:00:00.157 ****** 2025-09-08 00:34:57.149810 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-08 00:34:57.149821 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-08 00:34:57.149832 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-08 00:34:57.149843 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-08 00:34:57.149854 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-08 00:34:57.149864 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-08 00:34:57.149875 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-08 00:34:57.149886 | orchestrator | 2025-09-08 00:34:57.149896 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-08 00:34:57.149907 | orchestrator | 2025-09-08 00:34:57.149918 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-08 00:34:57.149929 | orchestrator | Monday 08 September 2025 00:34:36 +0000 (0:00:00.778) 0:00:00.936 ****** 2025-09-08 00:34:57.149940 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:57.149952 | orchestrator | 2025-09-08 
00:34:57.149963 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-08 00:34:57.149974 | orchestrator | 2025-09-08 00:34:57.149984 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-08 00:34:57.149995 | orchestrator | Monday 08 September 2025 00:34:38 +0000 (0:00:02.488) 0:00:03.425 ****** 2025-09-08 00:34:57.150090 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:34:57.150106 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:34:57.150116 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:34:57.150127 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:34:57.150138 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:34:57.150148 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:34:57.150159 | orchestrator | 2025-09-08 00:34:57.150171 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-08 00:34:57.150184 | orchestrator | 2025-09-08 00:34:57.150196 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-08 00:34:57.150216 | orchestrator | Monday 08 September 2025 00:34:40 +0000 (0:00:01.856) 0:00:05.281 ****** 2025-09-08 00:34:57.150230 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:57.150245 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:57.150281 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:57.150294 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:57.150307 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 
00:34:57.150319 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-08 00:34:57.150331 | orchestrator | 2025-09-08 00:34:57.150344 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-08 00:34:57.150356 | orchestrator | Monday 08 September 2025 00:34:42 +0000 (0:00:01.536) 0:00:06.817 ****** 2025-09-08 00:34:57.150370 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:34:57.150382 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:34:57.150395 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:34:57.150407 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:34:57.150420 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:34:57.150432 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:34:57.150444 | orchestrator | 2025-09-08 00:34:57.150457 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-08 00:34:57.150470 | orchestrator | Monday 08 September 2025 00:34:46 +0000 (0:00:03.784) 0:00:10.602 ****** 2025-09-08 00:34:57.150482 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:34:57.150495 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:34:57.150507 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:34:57.150521 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:34:57.150532 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:34:57.150543 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:34:57.150554 | orchestrator | 2025-09-08 00:34:57.150565 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-08 00:34:57.150576 | orchestrator | 2025-09-08 00:34:57.150587 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-08 00:34:57.150598 | orchestrator | Monday 08 September 2025 00:34:46 +0000 (0:00:00.827) 
0:00:11.429 ****** 2025-09-08 00:34:57.150609 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:57.150620 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:34:57.150630 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:34:57.150641 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:34:57.150652 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:34:57.150662 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:34:57.150673 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:34:57.150684 | orchestrator | 2025-09-08 00:34:57.150695 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-08 00:34:57.150705 | orchestrator | Monday 08 September 2025 00:34:48 +0000 (0:00:01.658) 0:00:13.088 ****** 2025-09-08 00:34:57.150716 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:57.150735 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:34:57.150746 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:34:57.150757 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:34:57.150767 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:34:57.150778 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:34:57.150809 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:34:57.150821 | orchestrator | 2025-09-08 00:34:57.150832 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-08 00:34:57.150843 | orchestrator | Monday 08 September 2025 00:34:50 +0000 (0:00:01.773) 0:00:14.862 ****** 2025-09-08 00:34:57.150854 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:34:57.150865 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:34:57.150876 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:34:57.150887 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:34:57.150898 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:34:57.150909 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:57.150919 | 
orchestrator | ok: [testbed-node-2] 2025-09-08 00:34:57.150930 | orchestrator | 2025-09-08 00:34:57.150941 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-08 00:34:57.150952 | orchestrator | Monday 08 September 2025 00:34:51 +0000 (0:00:01.492) 0:00:16.354 ****** 2025-09-08 00:34:57.150963 | orchestrator | changed: [testbed-manager] 2025-09-08 00:34:57.150974 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:34:57.150985 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:34:57.150995 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:34:57.151006 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:34:57.151017 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:34:57.151027 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:34:57.151038 | orchestrator | 2025-09-08 00:34:57.151049 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-08 00:34:57.151060 | orchestrator | Monday 08 September 2025 00:34:53 +0000 (0:00:01.808) 0:00:18.163 ****** 2025-09-08 00:34:57.151071 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:34:57.151081 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:34:57.151092 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:34:57.151103 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:34:57.151114 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:34:57.151125 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:34:57.151136 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:34:57.151147 | orchestrator | 2025-09-08 00:34:57.151158 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-08 00:34:57.151169 | orchestrator | 2025-09-08 00:34:57.151179 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-08 00:34:57.151190 | orchestrator | Monday 08 September 
2025 00:34:54 +0000 (0:00:00.627) 0:00:18.790 ****** 2025-09-08 00:34:57.151201 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:34:57.151212 | orchestrator | ok: [testbed-manager] 2025-09-08 00:34:57.151223 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:34:57.151234 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:34:57.151244 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:34:57.151290 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:34:57.151307 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:34:57.151318 | orchestrator | 2025-09-08 00:34:57.151329 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:34:57.151341 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:34:57.151354 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:34:57.151365 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:34:57.151376 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:34:57.151394 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:34:57.151405 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:34:57.151416 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:34:57.151427 | orchestrator | 2025-09-08 00:34:57.151438 | orchestrator | 2025-09-08 00:34:57.151449 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:34:57.151460 | orchestrator | Monday 08 September 2025 00:34:57 +0000 (0:00:02.796) 0:00:21.586 ****** 2025-09-08 00:34:57.151470 | orchestrator | 
=============================================================================== 2025-09-08 00:34:57.151481 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.78s 2025-09-08 00:34:57.151492 | orchestrator | Install python3-docker -------------------------------------------------- 2.80s 2025-09-08 00:34:57.151503 | orchestrator | Apply netplan configuration --------------------------------------------- 2.49s 2025-09-08 00:34:57.151514 | orchestrator | Apply netplan configuration --------------------------------------------- 1.86s 2025-09-08 00:34:57.151525 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.81s 2025-09-08 00:34:57.151535 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.77s 2025-09-08 00:34:57.151546 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.66s 2025-09-08 00:34:57.151557 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.54s 2025-09-08 00:34:57.151568 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2025-09-08 00:34:57.151579 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.83s 2025-09-08 00:34:57.151589 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.78s 2025-09-08 00:34:57.151607 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2025-09-08 00:34:57.774118 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-08 00:35:09.795666 | orchestrator | 2025-09-08 00:35:09 | INFO  | Task 4f1278ac-808b-4832-9146-d932434bfc27 (reboot) was prepared for execution. 
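Among the workaround tasks recapped above, the CA update step branches by distro family: the Debian-family nodes ran `update-ca-certificates`, while the RedHat-family `update-ca-trust` task was skipped on every host. A minimal sketch of that branch (the function name is illustrative, not part of the testbed scripts):

```shell
#!/usr/bin/env bash
# Pick the CA-store refresh command for a distro family, mirroring the
# Debian/RedHat split seen in the play above. Illustrative helper only.
ca_update_cmd() {
    case "$1" in
        Debian) echo "update-ca-certificates" ;;
        RedHat) echo "update-ca-trust extract" ;;
        *) return 1 ;;            # unknown family: caller must decide
    esac
}
```

On this Ubuntu 24.04 run only the Debian branch fires, which matches the `skipping:` lines for the `update-ca-trust` task.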
2025-09-08 00:35:09.795774 | orchestrator | 2025-09-08 00:35:09 | INFO  | It takes a moment until task 4f1278ac-808b-4832-9146-d932434bfc27 (reboot) has been started and output is visible here. 2025-09-08 00:35:20.137795 | orchestrator | 2025-09-08 00:35:20.137936 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-08 00:35:20.137953 | orchestrator | 2025-09-08 00:35:20.137965 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-08 00:35:20.137978 | orchestrator | Monday 08 September 2025 00:35:13 +0000 (0:00:00.244) 0:00:00.244 ****** 2025-09-08 00:35:20.137989 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:35:20.138001 | orchestrator | 2025-09-08 00:35:20.138013 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-08 00:35:20.138083 | orchestrator | Monday 08 September 2025 00:35:13 +0000 (0:00:00.093) 0:00:00.338 ****** 2025-09-08 00:35:20.138095 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:35:20.138106 | orchestrator | 2025-09-08 00:35:20.138118 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-08 00:35:20.138129 | orchestrator | Monday 08 September 2025 00:35:14 +0000 (0:00:00.980) 0:00:01.318 ****** 2025-09-08 00:35:20.138140 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:35:20.138151 | orchestrator | 2025-09-08 00:35:20.138161 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-08 00:35:20.138204 | orchestrator | 2025-09-08 00:35:20.138246 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-08 00:35:20.138257 | orchestrator | Monday 08 September 2025 00:35:15 +0000 (0:00:00.140) 0:00:01.459 ****** 2025-09-08 00:35:20.138268 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:35:20.138279 | 
orchestrator | 2025-09-08 00:35:20.138290 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-08 00:35:20.138300 | orchestrator | Monday 08 September 2025 00:35:15 +0000 (0:00:00.109) 0:00:01.568 ****** 2025-09-08 00:35:20.138314 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:35:20.138326 | orchestrator | 2025-09-08 00:35:20.138339 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-08 00:35:20.138370 | orchestrator | Monday 08 September 2025 00:35:15 +0000 (0:00:00.657) 0:00:02.225 ****** 2025-09-08 00:35:20.138383 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:35:20.138395 | orchestrator | 2025-09-08 00:35:20.138408 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-08 00:35:20.138421 | orchestrator | 2025-09-08 00:35:20.138433 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-08 00:35:20.138446 | orchestrator | Monday 08 September 2025 00:35:15 +0000 (0:00:00.134) 0:00:02.360 ****** 2025-09-08 00:35:20.138458 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:35:20.138471 | orchestrator | 2025-09-08 00:35:20.138483 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-08 00:35:20.138496 | orchestrator | Monday 08 September 2025 00:35:16 +0000 (0:00:00.234) 0:00:02.594 ****** 2025-09-08 00:35:20.138509 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:35:20.138522 | orchestrator | 2025-09-08 00:35:20.138539 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-08 00:35:20.138550 | orchestrator | Monday 08 September 2025 00:35:16 +0000 (0:00:00.693) 0:00:03.288 ****** 2025-09-08 00:35:20.138561 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:35:20.138572 | orchestrator | 2025-09-08 00:35:20.138583 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-08 00:35:20.138593 | orchestrator | 2025-09-08 00:35:20.138604 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-08 00:35:20.138615 | orchestrator | Monday 08 September 2025 00:35:17 +0000 (0:00:00.128) 0:00:03.417 ****** 2025-09-08 00:35:20.138626 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:35:20.138636 | orchestrator | 2025-09-08 00:35:20.138647 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-08 00:35:20.138658 | orchestrator | Monday 08 September 2025 00:35:17 +0000 (0:00:00.113) 0:00:03.531 ****** 2025-09-08 00:35:20.138669 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:35:20.138679 | orchestrator | 2025-09-08 00:35:20.138690 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-08 00:35:20.138701 | orchestrator | Monday 08 September 2025 00:35:17 +0000 (0:00:00.718) 0:00:04.250 ****** 2025-09-08 00:35:20.138712 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:35:20.138723 | orchestrator | 2025-09-08 00:35:20.138734 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-08 00:35:20.138744 | orchestrator | 2025-09-08 00:35:20.138755 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-08 00:35:20.138766 | orchestrator | Monday 08 September 2025 00:35:17 +0000 (0:00:00.119) 0:00:04.369 ****** 2025-09-08 00:35:20.138777 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:35:20.138787 | orchestrator | 2025-09-08 00:35:20.138798 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-08 00:35:20.138809 | orchestrator | Monday 08 September 2025 00:35:18 +0000 (0:00:00.100) 0:00:04.470 ****** 2025-09-08 
00:35:20.138819 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:35:20.138830 | orchestrator | 2025-09-08 00:35:20.138841 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-08 00:35:20.138852 | orchestrator | Monday 08 September 2025 00:35:18 +0000 (0:00:00.724) 0:00:05.194 ****** 2025-09-08 00:35:20.138875 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:35:20.138886 | orchestrator | 2025-09-08 00:35:20.138896 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-08 00:35:20.138908 | orchestrator | 2025-09-08 00:35:20.138918 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-08 00:35:20.138929 | orchestrator | Monday 08 September 2025 00:35:18 +0000 (0:00:00.137) 0:00:05.332 ****** 2025-09-08 00:35:20.138940 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:35:20.138951 | orchestrator | 2025-09-08 00:35:20.138962 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-08 00:35:20.138972 | orchestrator | Monday 08 September 2025 00:35:19 +0000 (0:00:00.109) 0:00:05.442 ****** 2025-09-08 00:35:20.138983 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:35:20.138994 | orchestrator | 2025-09-08 00:35:20.139004 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-08 00:35:20.139015 | orchestrator | Monday 08 September 2025 00:35:19 +0000 (0:00:00.679) 0:00:06.121 ****** 2025-09-08 00:35:20.139046 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:35:20.139057 | orchestrator | 2025-09-08 00:35:20.139068 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:35:20.139080 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:35:20.139093 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:35:20.139104 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:35:20.139115 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:35:20.139126 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:35:20.139137 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:35:20.139147 | orchestrator | 2025-09-08 00:35:20.139158 | orchestrator | 2025-09-08 00:35:20.139169 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:35:20.139180 | orchestrator | Monday 08 September 2025 00:35:19 +0000 (0:00:00.036) 0:00:06.158 ****** 2025-09-08 00:35:20.139191 | orchestrator | =============================================================================== 2025-09-08 00:35:20.139202 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.45s 2025-09-08 00:35:20.139229 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.76s 2025-09-08 00:35:20.139241 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.70s 2025-09-08 00:35:20.431863 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-08 00:35:32.432290 | orchestrator | 2025-09-08 00:35:32 | INFO  | Task ae240cc1-5fae-4898-b3a5-43b8820ffb1e (wait-for-connection) was prepared for execution. 2025-09-08 00:35:32.432426 | orchestrator | 2025-09-08 00:35:32 | INFO  | It takes a moment until task ae240cc1-5fae-4898-b3a5-43b8820ffb1e (wait-for-connection) has been started and output is visible here. 
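The `wait-for-connection` play invoked here simply polls until the freshly rebooted nodes answer again (Ansible's `wait_for_connection` module). A generic shell analogue of that retry loop, with an illustrative helper name and a 1-second interval assumed:

```shell
#!/usr/bin/env bash
# Retry a command until it succeeds or the attempt budget is exhausted.
# Sketch of the polling pattern behind the wait-for-connection play.
retry() {
    local attempts=$1; shift
    local i
    for ((i = 1; i <= attempts; i++)); do
        "$@" && return 0          # command succeeded: node is reachable
        sleep 1                   # back off before the next probe
    done
    return 1                      # budget exhausted
}

# Hypothetical usage against one of the nodes above:
#   retry 30 ssh -o ConnectTimeout=5 testbed-node-0 true
```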
2025-09-08 00:35:48.499671 | orchestrator | 2025-09-08 00:35:48.499811 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-08 00:35:48.499829 | orchestrator | 2025-09-08 00:35:48.499842 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-08 00:35:48.499854 | orchestrator | Monday 08 September 2025 00:35:36 +0000 (0:00:00.258) 0:00:00.258 ****** 2025-09-08 00:35:48.499901 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:35:48.499914 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:35:48.499926 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:35:48.499936 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:35:48.499947 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:35:48.499958 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:35:48.499968 | orchestrator | 2025-09-08 00:35:48.499979 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:35:48.499991 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:48.500025 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:48.500037 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:48.500048 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:48.500059 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:48.500070 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:35:48.500080 | orchestrator | 2025-09-08 00:35:48.500091 | orchestrator | 2025-09-08 00:35:48.500103 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-08 00:35:48.500114 | orchestrator | Monday 08 September 2025 00:35:48 +0000 (0:00:11.575) 0:00:11.834 ****** 2025-09-08 00:35:48.500125 | orchestrator | =============================================================================== 2025-09-08 00:35:48.500136 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s 2025-09-08 00:35:48.795429 | orchestrator | + osism apply hddtemp 2025-09-08 00:36:00.814212 | orchestrator | 2025-09-08 00:36:00 | INFO  | Task eb991826-bf53-4ff1-8aa3-c7dac4d90537 (hddtemp) was prepared for execution. 2025-09-08 00:36:00.814327 | orchestrator | 2025-09-08 00:36:00 | INFO  | It takes a moment until task eb991826-bf53-4ff1-8aa3-c7dac4d90537 (hddtemp) has been started and output is visible here. 2025-09-08 00:36:28.443870 | orchestrator | 2025-09-08 00:36:28.443996 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-08 00:36:28.444014 | orchestrator | 2025-09-08 00:36:28.444026 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-08 00:36:28.444038 | orchestrator | Monday 08 September 2025 00:36:04 +0000 (0:00:00.265) 0:00:00.265 ****** 2025-09-08 00:36:28.444049 | orchestrator | ok: [testbed-manager] 2025-09-08 00:36:28.444061 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:36:28.444072 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:36:28.444083 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:36:28.444125 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:36:28.444137 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:36:28.444148 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:36:28.444159 | orchestrator | 2025-09-08 00:36:28.444170 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-08 00:36:28.444181 | orchestrator | Monday 08 September 2025 
00:36:05 +0000 (0:00:00.714) 0:00:00.980 ****** 2025-09-08 00:36:28.444193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:36:28.444207 | orchestrator | 2025-09-08 00:36:28.444219 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-08 00:36:28.444229 | orchestrator | Monday 08 September 2025 00:36:06 +0000 (0:00:01.289) 0:00:02.269 ****** 2025-09-08 00:36:28.444240 | orchestrator | ok: [testbed-manager] 2025-09-08 00:36:28.444276 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:36:28.444288 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:36:28.444299 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:36:28.444309 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:36:28.444320 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:36:28.444331 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:36:28.444342 | orchestrator | 2025-09-08 00:36:28.444353 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-08 00:36:28.444379 | orchestrator | Monday 08 September 2025 00:36:09 +0000 (0:00:02.133) 0:00:04.403 ****** 2025-09-08 00:36:28.444390 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:36:28.444404 | orchestrator | changed: [testbed-manager] 2025-09-08 00:36:28.444416 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:36:28.444430 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:36:28.444442 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:36:28.444455 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:36:28.444467 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:36:28.444480 | orchestrator | 2025-09-08 00:36:28.444492 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-08 00:36:28.444505 | orchestrator | Monday 08 September 2025 00:36:10 +0000 (0:00:01.209) 0:00:05.613 ****** 2025-09-08 00:36:28.444519 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:36:28.444531 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:36:28.444542 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:36:28.444555 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:36:28.444567 | orchestrator | ok: [testbed-manager] 2025-09-08 00:36:28.444580 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:36:28.444593 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:36:28.444605 | orchestrator | 2025-09-08 00:36:28.444617 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-08 00:36:28.444631 | orchestrator | Monday 08 September 2025 00:36:11 +0000 (0:00:01.220) 0:00:06.834 ****** 2025-09-08 00:36:28.444643 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:36:28.444656 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:36:28.444668 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:36:28.444680 | orchestrator | changed: [testbed-manager] 2025-09-08 00:36:28.444693 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:36:28.444705 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:36:28.444718 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:36:28.444730 | orchestrator | 2025-09-08 00:36:28.444743 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-08 00:36:28.444754 | orchestrator | Monday 08 September 2025 00:36:12 +0000 (0:00:00.910) 0:00:07.744 ****** 2025-09-08 00:36:28.444764 | orchestrator | changed: [testbed-manager] 2025-09-08 00:36:28.444775 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:36:28.444785 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:36:28.444796 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:36:28.444806 | orchestrator | changed: 
[testbed-node-4] 2025-09-08 00:36:28.444817 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:36:28.444827 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:36:28.444838 | orchestrator | 2025-09-08 00:36:28.444849 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-08 00:36:28.444859 | orchestrator | Monday 08 September 2025 00:36:24 +0000 (0:00:12.273) 0:00:20.018 ****** 2025-09-08 00:36:28.444870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:36:28.444881 | orchestrator | 2025-09-08 00:36:28.444892 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-08 00:36:28.444903 | orchestrator | Monday 08 September 2025 00:36:26 +0000 (0:00:01.419) 0:00:21.437 ****** 2025-09-08 00:36:28.444913 | orchestrator | changed: [testbed-manager] 2025-09-08 00:36:28.444924 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:36:28.444945 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:36:28.444956 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:36:28.444967 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:36:28.444978 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:36:28.444988 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:36:28.444999 | orchestrator | 2025-09-08 00:36:28.445010 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:36:28.445021 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:36:28.445052 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:28.445064 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:28.445075 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:28.445086 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:28.445116 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:28.445127 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-08 00:36:28.445138 | orchestrator | 2025-09-08 00:36:28.445149 | orchestrator | 2025-09-08 00:36:28.445160 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:36:28.445171 | orchestrator | Monday 08 September 2025 00:36:28 +0000 (0:00:01.908) 0:00:23.346 ****** 2025-09-08 00:36:28.445182 | orchestrator | =============================================================================== 2025-09-08 00:36:28.445192 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.27s 2025-09-08 00:36:28.445203 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.13s 2025-09-08 00:36:28.445214 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.91s 2025-09-08 00:36:28.445231 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.42s 2025-09-08 00:36:28.445241 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.29s 2025-09-08 00:36:28.445252 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s 2025-09-08 00:36:28.445263 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.21s 2025-09-08 00:36:28.445274 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.91s 2025-09-08 00:36:28.445284 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s 2025-09-08 00:36:28.762849 | orchestrator | ++ semver 9.2.0 7.1.1 2025-09-08 00:36:28.823998 | orchestrator | + [[ 1 -ge 0 ]] 2025-09-08 00:36:28.824079 | orchestrator | + sudo systemctl restart manager.service 2025-09-08 00:36:42.155163 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-08 00:36:42.155289 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-08 00:36:42.155306 | orchestrator | + local max_attempts=60 2025-09-08 00:36:42.155321 | orchestrator | + local name=ceph-ansible 2025-09-08 00:36:42.155332 | orchestrator | + local attempt_num=1 2025-09-08 00:36:42.155343 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:36:42.199336 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:36:42.199363 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:36:42.199375 | orchestrator | + sleep 5 2025-09-08 00:36:47.203548 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:36:47.237726 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:36:47.237807 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:36:47.237820 | orchestrator | + sleep 5 2025-09-08 00:36:52.242245 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:36:52.278222 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:36:52.278272 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:36:52.278286 | orchestrator | + sleep 5 2025-09-08 00:36:57.282274 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:36:57.324331 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:36:57.324381 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2025-09-08 00:36:57.324389 | orchestrator | + sleep 5 2025-09-08 00:37:02.329236 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:02.367251 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:02.367374 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:02.367400 | orchestrator | + sleep 5 2025-09-08 00:37:07.372201 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:07.414449 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:07.414531 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:07.414545 | orchestrator | + sleep 5 2025-09-08 00:37:12.419217 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:12.459449 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:12.459528 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:12.459560 | orchestrator | + sleep 5 2025-09-08 00:37:17.465650 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:17.536278 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:17.536370 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:17.536385 | orchestrator | + sleep 5 2025-09-08 00:37:22.538691 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:22.588517 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:22.588593 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:22.588606 | orchestrator | + sleep 5 2025-09-08 00:37:27.592839 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:27.631885 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:27.632032 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-08 00:37:27.632050 | orchestrator | + sleep 5 2025-09-08 00:37:32.636741 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:32.680876 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:32.680978 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:32.680993 | orchestrator | + sleep 5 2025-09-08 00:37:37.686770 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:37.731073 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:37.731172 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:37.731185 | orchestrator | + sleep 5 2025-09-08 00:37:42.735306 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:42.770694 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:42.770783 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-08 00:37:42.770799 | orchestrator | + sleep 5 2025-09-08 00:37:47.774742 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-08 00:37:47.813682 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:47.813740 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-08 00:37:47.813754 | orchestrator | + local max_attempts=60 2025-09-08 00:37:47.813766 | orchestrator | + local name=kolla-ansible 2025-09-08 00:37:47.814104 | orchestrator | + local attempt_num=1 2025-09-08 00:37:47.814957 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-08 00:37:47.854771 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:47.854804 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-08 00:37:47.854815 | orchestrator | + local max_attempts=60 2025-09-08 00:37:47.854826 | orchestrator | + local name=osism-ansible 2025-09-08 00:37:47.854837 | 
orchestrator | + local attempt_num=1 2025-09-08 00:37:47.855636 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-08 00:37:47.896761 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-08 00:37:47.896803 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-08 00:37:47.896845 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-08 00:37:48.061357 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-08 00:37:48.200741 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-08 00:37:48.382308 | orchestrator | ARA in osism-ansible already disabled. 2025-09-08 00:37:48.578778 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-08 00:37:48.578870 | orchestrator | + osism apply gather-facts 2025-09-08 00:38:00.592367 | orchestrator | 2025-09-08 00:38:00 | INFO  | Task 07ee3371-73ff-48b1-a2c5-441a42cdb6dc (gather-facts) was prepared for execution. 2025-09-08 00:38:00.592491 | orchestrator | 2025-09-08 00:38:00 | INFO  | It takes a moment until task 07ee3371-73ff-48b1-a2c5-441a42cdb6dc (gather-facts) has been started and output is visible here. 
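The trace above shows `wait_for_container_healthy` polling `docker inspect -f '{{.State.Health.Status}}' <name>` every five seconds until the container reports `healthy`, bailing out after `max_attempts` probes. A minimal standalone sketch of that polling pattern, with the Docker probe stubbed out (`probe_health` is a stand-in introduced here for the demo, not part of the real script):

```shell
# Sketch of the wait_for_container_healthy pattern from the trace above.
# probe_health fakes `docker inspect -f '{{.State.Health.Status}}' <name>`:
# it reports "starting" twice, then "healthy" (an assumption for the demo).
set -e

ATTEMPTS_FILE=$(mktemp)
echo 0 > "$ATTEMPTS_FILE"
probe_health() {
    local n
    n=$(cat "$ATTEMPTS_FILE")
    echo $((n + 1)) > "$ATTEMPTS_FILE"
    if [ "$n" -lt 2 ]; then echo starting; else echo healthy; fi
}

wait_for_container_healthy() {
    local max_attempts=$1 name=$2 attempt_num=1
    until [ "$(probe_health "$name")" = "healthy" ]; do
        if (( attempt_num++ == max_attempts )); then
            echo "timed out waiting for $name" >&2
            return 1
        fi
        sleep 0.1   # the real script sleeps 5 seconds between probes
    done
    echo "$name is healthy"
}

wait_for_container_healthy 60 demo-container
```

The `ceph-ansible` container above cycles through exactly these states (`unhealthy`, then `starting`, then `healthy`) before the script moves on to the next container.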
2025-09-08 00:38:13.926360 | orchestrator |
2025-09-08 00:38:13.926495 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-08 00:38:13.926509 | orchestrator |
2025-09-08 00:38:13.926520 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-08 00:38:13.926551 | orchestrator | Monday 08 September 2025 00:38:04 +0000 (0:00:00.226) 0:00:00.226 ******
2025-09-08 00:38:13.926561 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:38:13.926572 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:38:13.926582 | orchestrator | ok: [testbed-manager]
2025-09-08 00:38:13.926591 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:38:13.926600 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:38:13.926609 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:38:13.926617 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:38:13.926626 | orchestrator |
2025-09-08 00:38:13.926635 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-08 00:38:13.926644 | orchestrator |
2025-09-08 00:38:13.926653 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-08 00:38:13.926661 | orchestrator | Monday 08 September 2025 00:38:12 +0000 (0:00:08.287) 0:00:08.513 ******
2025-09-08 00:38:13.926670 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:38:13.926680 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:38:13.926689 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:38:13.926697 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:38:13.926706 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:38:13.926714 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:38:13.926723 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:38:13.926731 | orchestrator |
2025-09-08 00:38:13.926740 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:38:13.926749 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:38:13.926760 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:38:13.926768 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:38:13.926777 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:38:13.926786 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:38:13.926795 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:38:13.926803 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:38:13.926812 | orchestrator |
2025-09-08 00:38:13.926821 | orchestrator |
2025-09-08 00:38:13.926829 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:38:13.926838 | orchestrator | Monday 08 September 2025 00:38:13 +0000 (0:00:00.541) 0:00:09.055 ******
2025-09-08 00:38:13.926877 | orchestrator | ===============================================================================
2025-09-08 00:38:13.926888 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.29s
2025-09-08 00:38:13.926898 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2025-09-08 00:38:14.235848 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-09-08 00:38:14.249511 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-09-08 00:38:14.267991 |
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-08 00:38:14.286784 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-08 00:38:14.309346 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-08 00:38:14.331006 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-08 00:38:14.350394 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-08 00:38:14.369323 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-08 00:38:14.390123 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-08 00:38:14.409630 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-08 00:38:14.424101 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-08 00:38:14.442734 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-08 00:38:14.463631 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-08 00:38:14.484281 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-08 00:38:14.506490 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-08 00:38:14.524726 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-08 00:38:14.545850 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-08 00:38:14.563184 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-08 00:38:14.577212 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-08 00:38:14.597877 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-08 00:38:14.612558 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-08 00:38:15.088215 | orchestrator | ok: Runtime: 0:23:19.800373 2025-09-08 00:38:15.185669 | 2025-09-08 00:38:15.185803 | TASK [Deploy services] 2025-09-08 00:38:15.717569 | orchestrator | skipping: Conditional result was False 2025-09-08 00:38:15.735537 | 2025-09-08 00:38:15.735725 | TASK [Deploy in a nutshell] 2025-09-08 00:38:16.448850 | orchestrator | 2025-09-08 00:38:16.449079 | orchestrator | # PULL IMAGES 2025-09-08 00:38:16.449102 | orchestrator | 2025-09-08 00:38:16.449116 | orchestrator | + set -e 2025-09-08 00:38:16.449134 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-08 00:38:16.449155 | orchestrator | ++ export INTERACTIVE=false 2025-09-08 00:38:16.449170 | orchestrator | ++ INTERACTIVE=false 2025-09-08 00:38:16.449213 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-08 00:38:16.449234 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-08 00:38:16.449248 | orchestrator | + source /opt/manager-vars.sh 2025-09-08 00:38:16.449259 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-08 00:38:16.449278 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-08 00:38:16.449289 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-08 00:38:16.449306 | orchestrator | ++ 
CEPH_VERSION=reef 2025-09-08 00:38:16.449318 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-08 00:38:16.449336 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-08 00:38:16.449347 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-08 00:38:16.449360 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-08 00:38:16.449371 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-08 00:38:16.449383 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-08 00:38:16.449394 | orchestrator | ++ export ARA=false 2025-09-08 00:38:16.449405 | orchestrator | ++ ARA=false 2025-09-08 00:38:16.449416 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-08 00:38:16.449426 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-08 00:38:16.449437 | orchestrator | ++ export TEMPEST=true 2025-09-08 00:38:16.449448 | orchestrator | ++ TEMPEST=true 2025-09-08 00:38:16.449458 | orchestrator | ++ export IS_ZUUL=true 2025-09-08 00:38:16.449469 | orchestrator | ++ IS_ZUUL=true 2025-09-08 00:38:16.449479 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-08 00:38:16.449491 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.173 2025-09-08 00:38:16.449502 | orchestrator | ++ export EXTERNAL_API=false 2025-09-08 00:38:16.449513 | orchestrator | ++ EXTERNAL_API=false 2025-09-08 00:38:16.449523 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-08 00:38:16.449535 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-08 00:38:16.449545 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-08 00:38:16.449556 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-08 00:38:16.449566 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-08 00:38:16.449583 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-08 00:38:16.449595 | orchestrator | + echo 2025-09-08 00:38:16.449606 | orchestrator | + echo '# PULL IMAGES' 2025-09-08 00:38:16.449616 | orchestrator | + echo 2025-09-08 00:38:16.449643 | orchestrator | ++ semver 9.2.0 7.0.0 2025-09-08 
00:38:16.517504 | orchestrator | + [[ 1 -ge 0 ]] 2025-09-08 00:38:16.517575 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-08 00:38:18.307349 | orchestrator | 2025-09-08 00:38:18 | INFO  | Trying to run play pull-images in environment custom 2025-09-08 00:38:28.406409 | orchestrator | 2025-09-08 00:38:28 | INFO  | Task 1b36fb35-74db-43f0-b31d-cb8841019e68 (pull-images) was prepared for execution. 2025-09-08 00:38:28.406537 | orchestrator | 2025-09-08 00:38:28 | INFO  | Task 1b36fb35-74db-43f0-b31d-cb8841019e68 is running in background. No more output. Check ARA for logs. 2025-09-08 00:38:30.659135 | orchestrator | 2025-09-08 00:38:30 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-08 00:38:40.801261 | orchestrator | 2025-09-08 00:38:40 | INFO  | Task 08798b19-cf40-4654-9b67-f30e03934452 (wipe-partitions) was prepared for execution. 2025-09-08 00:38:40.801410 | orchestrator | 2025-09-08 00:38:40 | INFO  | It takes a moment until task 08798b19-cf40-4654-9b67-f30e03934452 (wipe-partitions) has been started and output is visible here. 
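The deploy script gates version-dependent steps on a `semver` helper: the trace shows `semver 9.2.0 7.0.0` followed by `[[ 1 -ge 0 ]]`, i.e. the helper prints 1/0/-1 for greater/equal/less and the script proceeds when the manager version is at least the threshold. The helper's implementation is not visible in the log; a rough stand-in using GNU `sort -V` (an assumption, good for plain `X.Y.Z` versions only):

```shell
# Stand-in for the `semver a b` comparisons in the trace: prints 1, 0, or -1
# depending on whether version a is greater than, equal to, or less than b.
# Relies on GNU `sort -V`; the real helper may differ.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]; then
        echo 1
    else
        echo -1
    fi
}

semver_cmp 9.2.0 7.0.0   # prints 1, so a `-ge 0` check passes
```

With `MANAGER_VERSION=9.2.0`, both comparisons in this log (against 7.1.1 and 7.0.0) yield 1, which is why the guarded branches run.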
2025-09-08 00:38:53.455331 | orchestrator | 2025-09-08 00:38:53.455483 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-08 00:38:53.455501 | orchestrator | 2025-09-08 00:38:53.455514 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-08 00:38:53.455532 | orchestrator | Monday 08 September 2025 00:38:45 +0000 (0:00:00.144) 0:00:00.144 ****** 2025-09-08 00:38:53.455543 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:38:53.455556 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:38:53.455568 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:38:53.455579 | orchestrator | 2025-09-08 00:38:53.455591 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-08 00:38:53.455631 | orchestrator | Monday 08 September 2025 00:38:45 +0000 (0:00:00.580) 0:00:00.725 ****** 2025-09-08 00:38:53.455643 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:38:53.455654 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:38:53.455665 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:38:53.455681 | orchestrator | 2025-09-08 00:38:53.455693 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-08 00:38:53.455704 | orchestrator | Monday 08 September 2025 00:38:46 +0000 (0:00:00.232) 0:00:00.958 ****** 2025-09-08 00:38:53.455715 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:38:53.455728 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:38:53.455738 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:38:53.455749 | orchestrator | 2025-09-08 00:38:53.455761 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-08 00:38:53.455772 | orchestrator | Monday 08 September 2025 00:38:46 +0000 (0:00:00.747) 0:00:01.705 ****** 2025-09-08 00:38:53.455783 | orchestrator | skipping: 
[testbed-node-3] 2025-09-08 00:38:53.455796 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:38:53.455809 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:38:53.455822 | orchestrator | 2025-09-08 00:38:53.455834 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-08 00:38:53.455847 | orchestrator | Monday 08 September 2025 00:38:47 +0000 (0:00:00.254) 0:00:01.960 ****** 2025-09-08 00:38:53.455860 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-08 00:38:53.455877 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-08 00:38:53.455916 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-08 00:38:53.455930 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-08 00:38:53.455943 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-08 00:38:53.455955 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-08 00:38:53.455968 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-08 00:38:53.455981 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-08 00:38:53.455993 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-08 00:38:53.456006 | orchestrator | 2025-09-08 00:38:53.456031 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-08 00:38:53.456055 | orchestrator | Monday 08 September 2025 00:38:48 +0000 (0:00:01.191) 0:00:03.151 ****** 2025-09-08 00:38:53.456069 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-08 00:38:53.456082 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-08 00:38:53.456094 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-08 00:38:53.456107 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-08 00:38:53.456121 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-08 00:38:53.456135 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-08 00:38:53.456148 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-08 00:38:53.456159 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-08 00:38:53.456169 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-08 00:38:53.456180 | orchestrator | 2025-09-08 00:38:53.456191 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-08 00:38:53.456202 | orchestrator | Monday 08 September 2025 00:38:49 +0000 (0:00:01.347) 0:00:04.499 ****** 2025-09-08 00:38:53.456213 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-08 00:38:53.456224 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-08 00:38:53.456235 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-08 00:38:53.456246 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-08 00:38:53.456257 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-08 00:38:53.456268 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-08 00:38:53.456279 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-08 00:38:53.456290 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-08 00:38:53.456319 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-08 00:38:53.456331 | orchestrator | 2025-09-08 00:38:53.456342 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-08 00:38:53.456353 | orchestrator | Monday 08 September 2025 00:38:51 +0000 (0:00:02.317) 0:00:06.816 ****** 2025-09-08 00:38:53.456364 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:38:53.456376 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:38:53.456386 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:38:53.456397 | orchestrator | 2025-09-08 00:38:53.456408 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-08 00:38:53.456419 | orchestrator | Monday 08 September 2025 00:38:52 +0000 (0:00:00.600) 0:00:07.417 ****** 2025-09-08 00:38:53.456430 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:38:53.456441 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:38:53.456452 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:38:53.456463 | orchestrator | 2025-09-08 00:38:53.456474 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:38:53.456487 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:38:53.456500 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:38:53.456532 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:38:53.456544 | orchestrator | 2025-09-08 00:38:53.456555 | orchestrator | 2025-09-08 00:38:53.456566 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:38:53.456577 | orchestrator | Monday 08 September 2025 00:38:53 +0000 (0:00:00.624) 0:00:08.041 ****** 2025-09-08 00:38:53.456588 | orchestrator | =============================================================================== 2025-09-08 00:38:53.456599 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.32s 2025-09-08 00:38:53.456610 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.35s 2025-09-08 00:38:53.456621 | orchestrator | Check device availability ----------------------------------------------- 1.19s 2025-09-08 00:38:53.456632 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.75s 2025-09-08 00:38:53.456643 | orchestrator | Request device events from the kernel 
----------------------------------- 0.62s 2025-09-08 00:38:53.456654 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s 2025-09-08 00:38:53.456665 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2025-09-08 00:38:53.456675 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-09-08 00:38:53.456686 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-09-08 00:39:05.803638 | orchestrator | 2025-09-08 00:39:05 | INFO  | Task 86da3731-42dd-4ef0-ab7b-3b77b7a0f16e (facts) was prepared for execution. 2025-09-08 00:39:05.803778 | orchestrator | 2025-09-08 00:39:05 | INFO  | It takes a moment until task 86da3731-42dd-4ef0-ab7b-3b77b7a0f16e (facts) has been started and output is visible here. 2025-09-08 00:39:18.381125 | orchestrator | 2025-09-08 00:39:18.381261 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-08 00:39:18.381278 | orchestrator | 2025-09-08 00:39:18.381291 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-08 00:39:18.381303 | orchestrator | Monday 08 September 2025 00:39:09 +0000 (0:00:00.274) 0:00:00.274 ****** 2025-09-08 00:39:18.381315 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:39:18.381327 | orchestrator | ok: [testbed-manager] 2025-09-08 00:39:18.381339 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:39:18.381350 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:39:18.381390 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:39:18.381401 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:39:18.381412 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:39:18.381423 | orchestrator | 2025-09-08 00:39:18.381434 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-08 00:39:18.381445 | 
orchestrator | Monday 08 September 2025 00:39:11 +0000 (0:00:01.110) 0:00:01.385 ****** 2025-09-08 00:39:18.381456 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:39:18.381468 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:39:18.381479 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:39:18.381490 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:39:18.381501 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:18.381512 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:18.381522 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:39:18.381533 | orchestrator | 2025-09-08 00:39:18.381544 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-08 00:39:18.381555 | orchestrator | 2025-09-08 00:39:18.381584 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-08 00:39:18.381596 | orchestrator | Monday 08 September 2025 00:39:12 +0000 (0:00:01.286) 0:00:02.671 ****** 2025-09-08 00:39:18.381607 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:39:18.381618 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:39:18.381629 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:39:18.381641 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:39:18.381653 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:39:18.381665 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:39:18.381679 | orchestrator | ok: [testbed-manager] 2025-09-08 00:39:18.381692 | orchestrator | 2025-09-08 00:39:18.381706 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-08 00:39:18.381719 | orchestrator | 2025-09-08 00:39:18.381732 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-08 00:39:18.381745 | orchestrator | Monday 08 September 2025 00:39:17 +0000 (0:00:05.132) 0:00:07.803 ****** 2025-09-08 00:39:18.381757 | orchestrator | 
skipping: [testbed-manager] 2025-09-08 00:39:18.381770 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:39:18.381783 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:39:18.381795 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:39:18.381808 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:39:18.381820 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:18.381833 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:39:18.381846 | orchestrator | 2025-09-08 00:39:18.381859 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:39:18.381898 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:18.381913 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:18.381926 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:18.381939 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:18.381953 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:18.381966 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:18.381979 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:39:18.381992 | orchestrator | 2025-09-08 00:39:18.382006 | orchestrator | 2025-09-08 00:39:18.382072 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:39:18.382095 | orchestrator | Monday 08 September 2025 00:39:17 +0000 (0:00:00.515) 0:00:08.319 ****** 2025-09-08 00:39:18.382106 | orchestrator | =============================================================================== 
2025-09-08 00:39:18.382118 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.13s
2025-09-08 00:39:18.382129 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s
2025-09-08 00:39:18.382141 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s
2025-09-08 00:39:18.382152 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-09-08 00:39:20.563204 | orchestrator | 2025-09-08 00:39:20 | INFO  | Task 6b36c3ca-893d-4063-91e3-0be69fc734b3 (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-08 00:39:20.563319 | orchestrator | 2025-09-08 00:39:20 | INFO  | It takes a moment until task 6b36c3ca-893d-4063-91e3-0be69fc734b3 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-08 00:39:32.297328 | orchestrator |
2025-09-08 00:39:32.297470 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-08 00:39:32.297488 | orchestrator |
2025-09-08 00:39:32.297501 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-08 00:39:32.297514 | orchestrator | Monday 08 September 2025 00:39:24 +0000 (0:00:00.302) 0:00:00.302 ******
2025-09-08 00:39:32.297526 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-08 00:39:32.297538 | orchestrator |
2025-09-08 00:39:32.297549 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-08 00:39:32.297560 | orchestrator | Monday 08 September 2025 00:39:25 +0000 (0:00:00.222) 0:00:00.525 ******
2025-09-08 00:39:32.297572 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:39:32.297584 | orchestrator |
2025-09-08 00:39:32.297595 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.297606 | orchestrator | Monday 08 September 2025 00:39:25 +0000 (0:00:00.204) 0:00:00.729 ******
2025-09-08 00:39:32.297617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-08 00:39:32.297629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-08 00:39:32.297652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-08 00:39:32.297665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-08 00:39:32.297676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-08 00:39:32.297687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-08 00:39:32.297697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-08 00:39:32.297708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-08 00:39:32.297719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-08 00:39:32.297730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-08 00:39:32.297741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-08 00:39:32.297752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-08 00:39:32.297762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-08 00:39:32.297773 | orchestrator |
2025-09-08 00:39:32.297784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.297794 | orchestrator | Monday 08 September 2025 00:39:25 +0000 (0:00:00.328) 0:00:01.057 ******
2025-09-08 00:39:32.297806 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.297817 | orchestrator |
2025-09-08 00:39:32.297880 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.297893 | orchestrator | Monday 08 September 2025 00:39:25 +0000 (0:00:00.374) 0:00:01.432 ******
2025-09-08 00:39:32.297904 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.297915 | orchestrator |
2025-09-08 00:39:32.297926 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.297936 | orchestrator | Monday 08 September 2025 00:39:26 +0000 (0:00:00.189) 0:00:01.622 ******
2025-09-08 00:39:32.297947 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.297958 | orchestrator |
2025-09-08 00:39:32.297969 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.297980 | orchestrator | Monday 08 September 2025 00:39:26 +0000 (0:00:00.174) 0:00:01.796 ******
2025-09-08 00:39:32.297991 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298001 | orchestrator |
2025-09-08 00:39:32.298079 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.298093 | orchestrator | Monday 08 September 2025 00:39:26 +0000 (0:00:00.186) 0:00:01.983 ******
2025-09-08 00:39:32.298104 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298114 | orchestrator |
2025-09-08 00:39:32.298125 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.298136 | orchestrator | Monday 08 September 2025 00:39:26 +0000 (0:00:00.190) 0:00:02.173 ******
2025-09-08 00:39:32.298147 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298158 | orchestrator |
2025-09-08 00:39:32.298169 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.298179 | orchestrator | Monday 08 September 2025 00:39:26 +0000 (0:00:00.200) 0:00:02.374 ******
2025-09-08 00:39:32.298190 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298201 | orchestrator |
2025-09-08 00:39:32.298211 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.298222 | orchestrator | Monday 08 September 2025 00:39:27 +0000 (0:00:00.195) 0:00:02.569 ******
2025-09-08 00:39:32.298233 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298243 | orchestrator |
2025-09-08 00:39:32.298254 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.298264 | orchestrator | Monday 08 September 2025 00:39:27 +0000 (0:00:00.191) 0:00:02.761 ******
2025-09-08 00:39:32.298275 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e)
2025-09-08 00:39:32.298287 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e)
2025-09-08 00:39:32.298298 | orchestrator |
2025-09-08 00:39:32.298309 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.298319 | orchestrator | Monday 08 September 2025 00:39:27 +0000 (0:00:00.406) 0:00:03.168 ******
2025-09-08 00:39:32.298348 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb)
2025-09-08 00:39:32.298360 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb)
2025-09-08 00:39:32.298371 | orchestrator |
2025-09-08 00:39:32.298382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.298399 | orchestrator | Monday 08 September 2025 00:39:28 +0000 (0:00:00.412) 0:00:03.580 ******
2025-09-08 00:39:32.298411 |
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5)
2025-09-08 00:39:32.298421 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5)
2025-09-08 00:39:32.298432 | orchestrator |
2025-09-08 00:39:32.298443 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.298454 | orchestrator | Monday 08 September 2025 00:39:28 +0000 (0:00:00.640) 0:00:04.220 ******
2025-09-08 00:39:32.298464 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3)
2025-09-08 00:39:32.298484 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3)
2025-09-08 00:39:32.298495 | orchestrator |
2025-09-08 00:39:32.298506 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:32.298517 | orchestrator | Monday 08 September 2025 00:39:29 +0000 (0:00:00.647) 0:00:04.867 ******
2025-09-08 00:39:32.298528 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-08 00:39:32.298539 | orchestrator |
2025-09-08 00:39:32.298550 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:32.298561 | orchestrator | Monday 08 September 2025 00:39:30 +0000 (0:00:00.742) 0:00:05.610 ******
2025-09-08 00:39:32.298571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-08 00:39:32.298582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-08 00:39:32.298592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-08 00:39:32.298603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-08 00:39:32.298614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-08 00:39:32.298625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-08 00:39:32.298635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-08 00:39:32.298646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-08 00:39:32.298656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-08 00:39:32.298667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-08 00:39:32.298678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-08 00:39:32.298689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-08 00:39:32.298699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-08 00:39:32.298710 | orchestrator |
2025-09-08 00:39:32.298721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:32.298732 | orchestrator | Monday 08 September 2025 00:39:30 +0000 (0:00:00.413) 0:00:06.023 ******
2025-09-08 00:39:32.298742 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298753 | orchestrator |
2025-09-08 00:39:32.298764 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:32.298775 | orchestrator | Monday 08 September 2025 00:39:30 +0000 (0:00:00.190) 0:00:06.214 ******
2025-09-08 00:39:32.298785 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298796 | orchestrator |
2025-09-08 00:39:32.298807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:32.298818 | orchestrator | Monday 08 September 2025 00:39:31 +0000 (0:00:00.274) 0:00:06.488 ******
2025-09-08 00:39:32.298828 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298839 | orchestrator |
2025-09-08 00:39:32.298867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:32.298878 | orchestrator | Monday 08 September 2025 00:39:31 +0000 (0:00:00.207) 0:00:06.696 ******
2025-09-08 00:39:32.298889 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298900 | orchestrator |
2025-09-08 00:39:32.298911 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:32.298922 | orchestrator | Monday 08 September 2025 00:39:31 +0000 (0:00:00.217) 0:00:06.913 ******
2025-09-08 00:39:32.298932 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298943 | orchestrator |
2025-09-08 00:39:32.298954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:32.298971 | orchestrator | Monday 08 September 2025 00:39:31 +0000 (0:00:00.199) 0:00:07.113 ******
2025-09-08 00:39:32.298982 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.298993 | orchestrator |
2025-09-08 00:39:32.299004 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:32.299015 | orchestrator | Monday 08 September 2025 00:39:31 +0000 (0:00:00.207) 0:00:07.321 ******
2025-09-08 00:39:32.299025 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:32.299036 | orchestrator |
2025-09-08 00:39:32.299047 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:32.299058 | orchestrator | Monday 08 September 2025 00:39:32 +0000 (0:00:00.224) 0:00:07.545 ******
2025-09-08 00:39:32.299075 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.917427 | orchestrator |
2025-09-08 00:39:39.917562 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:39.917580 | orchestrator | Monday 08 September 2025 00:39:32 +0000 (0:00:00.205) 0:00:07.750 ******
2025-09-08 00:39:39.917591 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-08 00:39:39.917604 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-08 00:39:39.917616 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-08 00:39:39.917627 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-08 00:39:39.917638 | orchestrator |
2025-09-08 00:39:39.917649 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:39.917682 | orchestrator | Monday 08 September 2025 00:39:33 +0000 (0:00:01.001) 0:00:08.752 ******
2025-09-08 00:39:39.917693 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.917704 | orchestrator |
2025-09-08 00:39:39.917715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:39.917726 | orchestrator | Monday 08 September 2025 00:39:33 +0000 (0:00:00.200) 0:00:08.953 ******
2025-09-08 00:39:39.917737 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.917747 | orchestrator |
2025-09-08 00:39:39.917759 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:39.917769 | orchestrator | Monday 08 September 2025 00:39:33 +0000 (0:00:00.214) 0:00:09.167 ******
2025-09-08 00:39:39.917780 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.917791 | orchestrator |
2025-09-08 00:39:39.917802 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:39.917813 | orchestrator | Monday 08 September 2025 00:39:33 +0000 (0:00:00.198)
0:00:09.366 ******
2025-09-08 00:39:39.917823 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.917834 | orchestrator |
2025-09-08 00:39:39.917871 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-08 00:39:39.917882 | orchestrator | Monday 08 September 2025 00:39:34 +0000 (0:00:00.213) 0:00:09.579 ******
2025-09-08 00:39:39.917893 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-08 00:39:39.917904 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-08 00:39:39.917915 | orchestrator |
2025-09-08 00:39:39.917926 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-08 00:39:39.917936 | orchestrator | Monday 08 September 2025 00:39:34 +0000 (0:00:00.172) 0:00:09.752 ******
2025-09-08 00:39:39.917947 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.917958 | orchestrator |
2025-09-08 00:39:39.917969 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-08 00:39:39.917980 | orchestrator | Monday 08 September 2025 00:39:34 +0000 (0:00:00.138) 0:00:09.891 ******
2025-09-08 00:39:39.917991 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918001 | orchestrator |
2025-09-08 00:39:39.918012 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-08 00:39:39.918088 | orchestrator | Monday 08 September 2025 00:39:34 +0000 (0:00:00.145) 0:00:10.037 ******
2025-09-08 00:39:39.918099 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918110 | orchestrator |
2025-09-08 00:39:39.918145 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-08 00:39:39.918157 | orchestrator | Monday 08 September 2025 00:39:34 +0000 (0:00:00.151) 0:00:10.189 ******
2025-09-08 00:39:39.918168 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:39:39.918178 | orchestrator |
2025-09-08 00:39:39.918189 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-08 00:39:39.918200 | orchestrator | Monday 08 September 2025 00:39:34 +0000 (0:00:00.131) 0:00:10.320 ******
2025-09-08 00:39:39.918212 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6245231a-5e27-588f-a545-a88193777b58'}})
2025-09-08 00:39:39.918223 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7231c7d5-5dfe-5215-9efd-b7a5c24f93db'}})
2025-09-08 00:39:39.918234 | orchestrator |
2025-09-08 00:39:39.918244 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-08 00:39:39.918255 | orchestrator | Monday 08 September 2025 00:39:35 +0000 (0:00:00.174) 0:00:10.495 ******
2025-09-08 00:39:39.918266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6245231a-5e27-588f-a545-a88193777b58'}})
2025-09-08 00:39:39.918288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7231c7d5-5dfe-5215-9efd-b7a5c24f93db'}})
2025-09-08 00:39:39.918299 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918310 | orchestrator |
2025-09-08 00:39:39.918321 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-08 00:39:39.918332 | orchestrator | Monday 08 September 2025 00:39:35 +0000 (0:00:00.163) 0:00:10.658 ******
2025-09-08 00:39:39.918343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6245231a-5e27-588f-a545-a88193777b58'}})
2025-09-08 00:39:39.918354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7231c7d5-5dfe-5215-9efd-b7a5c24f93db'}})
2025-09-08 00:39:39.918365 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918376 | orchestrator |
2025-09-08 00:39:39.918387 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-08 00:39:39.918397 | orchestrator | Monday 08 September 2025 00:39:35 +0000 (0:00:00.154) 0:00:10.813 ******
2025-09-08 00:39:39.918408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6245231a-5e27-588f-a545-a88193777b58'}})
2025-09-08 00:39:39.918419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7231c7d5-5dfe-5215-9efd-b7a5c24f93db'}})
2025-09-08 00:39:39.918430 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918441 | orchestrator |
2025-09-08 00:39:39.918470 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-08 00:39:39.918481 | orchestrator | Monday 08 September 2025 00:39:35 +0000 (0:00:00.358) 0:00:11.171 ******
2025-09-08 00:39:39.918492 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:39:39.918503 | orchestrator |
2025-09-08 00:39:39.918514 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-08 00:39:39.918525 | orchestrator | Monday 08 September 2025 00:39:35 +0000 (0:00:00.140) 0:00:11.312 ******
2025-09-08 00:39:39.918536 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:39:39.918546 | orchestrator |
2025-09-08 00:39:39.918557 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-08 00:39:39.918568 | orchestrator | Monday 08 September 2025 00:39:36 +0000 (0:00:00.153) 0:00:11.465 ******
2025-09-08 00:39:39.918579 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918590 | orchestrator |
2025-09-08 00:39:39.918600 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-08 00:39:39.918611 | orchestrator | Monday 08 September 2025 00:39:36 +0000 (0:00:00.141)
0:00:11.607 ******
2025-09-08 00:39:39.918622 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918633 | orchestrator |
2025-09-08 00:39:39.918644 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-08 00:39:39.918662 | orchestrator | Monday 08 September 2025 00:39:36 +0000 (0:00:00.140) 0:00:11.748 ******
2025-09-08 00:39:39.918673 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918684 | orchestrator |
2025-09-08 00:39:39.918694 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-08 00:39:39.918705 | orchestrator | Monday 08 September 2025 00:39:36 +0000 (0:00:00.139) 0:00:11.887 ******
2025-09-08 00:39:39.918716 | orchestrator | ok: [testbed-node-3] => {
2025-09-08 00:39:39.918727 | orchestrator |     "ceph_osd_devices": {
2025-09-08 00:39:39.918738 | orchestrator |         "sdb": {
2025-09-08 00:39:39.918748 | orchestrator |             "osd_lvm_uuid": "6245231a-5e27-588f-a545-a88193777b58"
2025-09-08 00:39:39.918759 | orchestrator |         },
2025-09-08 00:39:39.918770 | orchestrator |         "sdc": {
2025-09-08 00:39:39.918781 | orchestrator |             "osd_lvm_uuid": "7231c7d5-5dfe-5215-9efd-b7a5c24f93db"
2025-09-08 00:39:39.918792 | orchestrator |         }
2025-09-08 00:39:39.918802 | orchestrator |     }
2025-09-08 00:39:39.918814 | orchestrator | }
2025-09-08 00:39:39.918824 | orchestrator |
2025-09-08 00:39:39.918835 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-08 00:39:39.918873 | orchestrator | Monday 08 September 2025 00:39:36 +0000 (0:00:00.143) 0:00:12.031 ******
2025-09-08 00:39:39.918884 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918895 | orchestrator |
2025-09-08 00:39:39.918906 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-08 00:39:39.918917 | orchestrator | Monday 08 September 2025 00:39:36 +0000 (0:00:00.138) 0:00:12.169 ******
2025-09-08 00:39:39.918933 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918944 | orchestrator |
2025-09-08 00:39:39.918955 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-08 00:39:39.918966 | orchestrator | Monday 08 September 2025 00:39:36 +0000 (0:00:00.155) 0:00:12.325 ******
2025-09-08 00:39:39.918977 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:39:39.918988 | orchestrator |
2025-09-08 00:39:39.918998 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-08 00:39:39.919009 | orchestrator | Monday 08 September 2025 00:39:36 +0000 (0:00:00.133) 0:00:12.458 ******
2025-09-08 00:39:39.919020 | orchestrator | changed: [testbed-node-3] => {
2025-09-08 00:39:39.919031 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-08 00:39:39.919041 | orchestrator |         "ceph_osd_devices": {
2025-09-08 00:39:39.919052 | orchestrator |             "sdb": {
2025-09-08 00:39:39.919063 | orchestrator |                 "osd_lvm_uuid": "6245231a-5e27-588f-a545-a88193777b58"
2025-09-08 00:39:39.919074 | orchestrator |             },
2025-09-08 00:39:39.919085 | orchestrator |             "sdc": {
2025-09-08 00:39:39.919096 | orchestrator |                 "osd_lvm_uuid": "7231c7d5-5dfe-5215-9efd-b7a5c24f93db"
2025-09-08 00:39:39.919106 | orchestrator |             }
2025-09-08 00:39:39.919117 | orchestrator |         },
2025-09-08 00:39:39.919128 | orchestrator |         "lvm_volumes": [
2025-09-08 00:39:39.919139 | orchestrator |             {
2025-09-08 00:39:39.919149 | orchestrator |                 "data": "osd-block-6245231a-5e27-588f-a545-a88193777b58",
2025-09-08 00:39:39.919160 | orchestrator |                 "data_vg": "ceph-6245231a-5e27-588f-a545-a88193777b58"
2025-09-08 00:39:39.919171 | orchestrator |             },
2025-09-08 00:39:39.919182 | orchestrator |             {
2025-09-08 00:39:39.919192 | orchestrator |                 "data": "osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db",
2025-09-08 00:39:39.919203 | orchestrator |                 "data_vg": "ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db"
2025-09-08 00:39:39.919214 | orchestrator |             }
2025-09-08 00:39:39.919224 | orchestrator |         ]
2025-09-08 00:39:39.919235 | orchestrator |     }
2025-09-08 00:39:39.919246 | orchestrator | }
2025-09-08 00:39:39.919256 | orchestrator |
2025-09-08 00:39:39.919267 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-08 00:39:39.919278 | orchestrator | Monday 08 September 2025 00:39:37 +0000 (0:00:00.184) 0:00:12.643 ******
2025-09-08 00:39:39.919297 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-08 00:39:39.919308 | orchestrator |
2025-09-08 00:39:39.919318 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-08 00:39:39.919329 | orchestrator |
2025-09-08 00:39:39.919340 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-08 00:39:39.919350 | orchestrator | Monday 08 September 2025 00:39:39 +0000 (0:00:02.221) 0:00:14.864 ******
2025-09-08 00:39:39.919361 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-08 00:39:39.919373 | orchestrator |
2025-09-08 00:39:39.919383 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-08 00:39:39.919394 | orchestrator | Monday 08 September 2025 00:39:39 +0000 (0:00:00.261) 0:00:15.125 ******
2025-09-08 00:39:39.919405 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:39:39.919415 | orchestrator |
2025-09-08 00:39:39.919426 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:39.919444 | orchestrator | Monday 08 September 2025 00:39:39 +0000 (0:00:00.245) 0:00:15.371 ******
2025-09-08 00:39:47.983327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-08 00:39:47.983462 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-08 00:39:47.983478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-08 00:39:47.983489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-08 00:39:47.983501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-08 00:39:47.983512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-08 00:39:47.983523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-08 00:39:47.983533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-08 00:39:47.983544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-08 00:39:47.983556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-08 00:39:47.983590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-08 00:39:47.983601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-08 00:39:47.983612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-08 00:39:47.983623 | orchestrator |
2025-09-08 00:39:47.983640 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.983653 | orchestrator | Monday 08 September 2025 00:39:40 +0000 (0:00:00.409) 0:00:15.780 ******
2025-09-08 00:39:47.983665 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:47.983677 | orchestrator |
2025-09-08 00:39:47.983688 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.983699 | orchestrator | Monday 08 September 2025 00:39:40 +0000 (0:00:00.207) 0:00:15.987 ******
2025-09-08 00:39:47.983710 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:47.983721 | orchestrator |
2025-09-08 00:39:47.983732 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.983742 | orchestrator | Monday 08 September 2025 00:39:40 +0000 (0:00:00.213) 0:00:16.201 ******
2025-09-08 00:39:47.983753 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:47.983764 | orchestrator |
2025-09-08 00:39:47.983775 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.983786 | orchestrator | Monday 08 September 2025 00:39:40 +0000 (0:00:00.198) 0:00:16.400 ******
2025-09-08 00:39:47.983796 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:47.983807 | orchestrator |
2025-09-08 00:39:47.983871 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.983887 | orchestrator | Monday 08 September 2025 00:39:41 +0000 (0:00:00.198) 0:00:16.598 ******
2025-09-08 00:39:47.983900 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:47.983912 | orchestrator |
2025-09-08 00:39:47.983925 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.983938 | orchestrator | Monday 08 September 2025 00:39:41 +0000 (0:00:00.203) 0:00:16.802 ******
2025-09-08 00:39:47.983950 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:47.983963 | orchestrator |
2025-09-08 00:39:47.983976 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.983989 | orchestrator | Monday 08 September 2025 00:39:41 +0000 (0:00:00.596) 0:00:17.398 ******
2025-09-08 00:39:47.984001 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:47.984013 | orchestrator |
2025-09-08 00:39:47.984026 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.984038 | orchestrator | Monday 08 September 2025 00:39:42 +0000 (0:00:00.204) 0:00:17.602 ******
2025-09-08 00:39:47.984050 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:47.984064 | orchestrator |
2025-09-08 00:39:47.984077 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.984089 | orchestrator | Monday 08 September 2025 00:39:42 +0000 (0:00:00.191) 0:00:17.794 ******
2025-09-08 00:39:47.984101 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9)
2025-09-08 00:39:47.984115 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9)
2025-09-08 00:39:47.984128 | orchestrator |
2025-09-08 00:39:47.984140 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.984153 | orchestrator | Monday 08 September 2025 00:39:42 +0000 (0:00:00.439) 0:00:18.233 ******
2025-09-08 00:39:47.984165 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359)
2025-09-08 00:39:47.984178 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359)
2025-09-08 00:39:47.984190 | orchestrator |
2025-09-08 00:39:47.984203 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.984214 | orchestrator | Monday 08 September 2025 00:39:43 +0000 (0:00:00.421) 0:00:18.654 ******
2025-09-08 00:39:47.984225 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98)
2025-09-08 00:39:47.984235 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98)
2025-09-08 00:39:47.984246 | orchestrator |
2025-09-08 00:39:47.984257 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.984267 | orchestrator | Monday 08 September 2025 00:39:43 +0000 (0:00:00.419) 0:00:19.073 ******
2025-09-08 00:39:47.984296 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72)
2025-09-08 00:39:47.984308 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72)
2025-09-08 00:39:47.984319 | orchestrator |
2025-09-08 00:39:47.984329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:39:47.984340 | orchestrator | Monday 08 September 2025 00:39:44 +0000 (0:00:00.476) 0:00:19.550 ******
2025-09-08 00:39:47.984351 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-08 00:39:47.984362 | orchestrator |
2025-09-08 00:39:47.984372 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:39:47.984389 | orchestrator | Monday 08 September 2025 00:39:44 +0000 (0:00:00.358) 0:00:19.908 ******
2025-09-08 00:39:47.984400 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-08 00:39:47.984411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-08 00:39:47.984429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-08 00:39:47.984440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-08 00:39:47.984451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-08 00:39:47.984461 | orchestrator |
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-08 00:39:47.984472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-08 00:39:47.984482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-08 00:39:47.984493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-08 00:39:47.984503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-08 00:39:47.984514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-08 00:39:47.984525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-08 00:39:47.984535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-08 00:39:47.984546 | orchestrator | 2025-09-08 00:39:47.984556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:47.984567 | orchestrator | Monday 08 September 2025 00:39:44 +0000 (0:00:00.395) 0:00:20.304 ****** 2025-09-08 00:39:47.984578 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:47.984588 | orchestrator | 2025-09-08 00:39:47.984599 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:47.984610 | orchestrator | Monday 08 September 2025 00:39:45 +0000 (0:00:00.211) 0:00:20.515 ****** 2025-09-08 00:39:47.984620 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:47.984631 | orchestrator | 2025-09-08 00:39:47.984642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:47.984653 | orchestrator | Monday 08 September 2025 00:39:45 +0000 (0:00:00.705) 0:00:21.221 ****** 
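The repeated "Add known links to the list of available block devices" tasks above match stable names under /dev/disk/by-id (scsi-0QEMU_QEMU_HARDDISK_..., scsi-SQEMU_..., ata-QEMU_DVD-ROM_...) to kernel device names such as sdb or sr0. A standalone sketch of that resolution (a hypothetical helper; the playbook itself gathers this through Ansible device facts rather than this code):

```python
import os

# Resolve each stable /dev/disk/by-id name to the kernel device it links to
# (e.g. "scsi-0QEMU_QEMU_HARDDISK_..." -> "sdb"), mirroring what the
# "Add known links ..." tasks record. The directory is a parameter so the
# walk can be exercised against any symlink tree, not only a live /dev.
def resolve_by_id(by_id_dir="/dev/disk/by-id"):
    links = {}
    for name in sorted(os.listdir(by_id_dir)):
        target = os.path.realpath(os.path.join(by_id_dir, name))
        links[name] = os.path.basename(target)
    return links
```

On the testbed nodes this would map both the scsi-0QEMU_ and scsi-SQEMU_ aliases of one disk to the same kernel name, which is why the log shows two ok items per device.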
2025-09-08 00:39:47.984663 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:47.984674 | orchestrator | 2025-09-08 00:39:47.984685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:47.984695 | orchestrator | Monday 08 September 2025 00:39:45 +0000 (0:00:00.211) 0:00:21.433 ****** 2025-09-08 00:39:47.984706 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:47.984717 | orchestrator | 2025-09-08 00:39:47.984727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:47.984738 | orchestrator | Monday 08 September 2025 00:39:46 +0000 (0:00:00.245) 0:00:21.678 ****** 2025-09-08 00:39:47.984749 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:47.984759 | orchestrator | 2025-09-08 00:39:47.984770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:47.984781 | orchestrator | Monday 08 September 2025 00:39:46 +0000 (0:00:00.217) 0:00:21.896 ****** 2025-09-08 00:39:47.984791 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:47.984802 | orchestrator | 2025-09-08 00:39:47.984813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:47.984824 | orchestrator | Monday 08 September 2025 00:39:46 +0000 (0:00:00.213) 0:00:22.109 ****** 2025-09-08 00:39:47.984849 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:47.984860 | orchestrator | 2025-09-08 00:39:47.984871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:47.984882 | orchestrator | Monday 08 September 2025 00:39:46 +0000 (0:00:00.212) 0:00:22.322 ****** 2025-09-08 00:39:47.984893 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:47.984904 | orchestrator | 2025-09-08 00:39:47.984914 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-08 00:39:47.984933 | orchestrator | Monday 08 September 2025 00:39:47 +0000 (0:00:00.219) 0:00:22.542 ****** 2025-09-08 00:39:47.984943 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-08 00:39:47.984955 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-08 00:39:47.984966 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-08 00:39:47.984977 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-08 00:39:47.984988 | orchestrator | 2025-09-08 00:39:47.984998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:47.985009 | orchestrator | Monday 08 September 2025 00:39:47 +0000 (0:00:00.705) 0:00:23.247 ****** 2025-09-08 00:39:47.985020 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:47.985031 | orchestrator | 2025-09-08 00:39:47.985048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:54.818610 | orchestrator | Monday 08 September 2025 00:39:47 +0000 (0:00:00.190) 0:00:23.438 ****** 2025-09-08 00:39:54.818732 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.818747 | orchestrator | 2025-09-08 00:39:54.818757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:54.818767 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.179) 0:00:23.618 ****** 2025-09-08 00:39:54.818776 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.818785 | orchestrator | 2025-09-08 00:39:54.818794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:39:54.818803 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.187) 0:00:23.806 ****** 2025-09-08 00:39:54.818812 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.818862 | orchestrator | 2025-09-08 00:39:54.818893 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-09-08 00:39:54.818903 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.185) 0:00:23.991 ****** 2025-09-08 00:39:54.818912 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-08 00:39:54.818921 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-08 00:39:54.818929 | orchestrator | 2025-09-08 00:39:54.818938 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-08 00:39:54.818947 | orchestrator | Monday 08 September 2025 00:39:48 +0000 (0:00:00.354) 0:00:24.346 ****** 2025-09-08 00:39:54.818955 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.818964 | orchestrator | 2025-09-08 00:39:54.818973 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-08 00:39:54.818982 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.123) 0:00:24.469 ****** 2025-09-08 00:39:54.818991 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.819000 | orchestrator | 2025-09-08 00:39:54.819008 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-08 00:39:54.819017 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.129) 0:00:24.599 ****** 2025-09-08 00:39:54.819025 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.819034 | orchestrator | 2025-09-08 00:39:54.819043 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-08 00:39:54.819051 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.134) 0:00:24.733 ****** 2025-09-08 00:39:54.819060 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:39:54.819069 | orchestrator | 2025-09-08 00:39:54.819078 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-08 
00:39:54.819086 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.134) 0:00:24.868 ****** 2025-09-08 00:39:54.819096 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'}}) 2025-09-08 00:39:54.819106 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e84ec590-0593-5433-8536-9c5125166743'}}) 2025-09-08 00:39:54.819114 | orchestrator | 2025-09-08 00:39:54.819123 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-08 00:39:54.819158 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.181) 0:00:25.050 ****** 2025-09-08 00:39:54.819170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'}})  2025-09-08 00:39:54.819182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e84ec590-0593-5433-8536-9c5125166743'}})  2025-09-08 00:39:54.819192 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.819203 | orchestrator | 2025-09-08 00:39:54.819213 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-08 00:39:54.819224 | orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.143) 0:00:25.193 ****** 2025-09-08 00:39:54.819234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'}})  2025-09-08 00:39:54.819244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e84ec590-0593-5433-8536-9c5125166743'}})  2025-09-08 00:39:54.819254 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.819264 | orchestrator | 2025-09-08 00:39:54.819274 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-08 00:39:54.819285 | 
orchestrator | Monday 08 September 2025 00:39:49 +0000 (0:00:00.171) 0:00:25.364 ****** 2025-09-08 00:39:54.819295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'}})  2025-09-08 00:39:54.819305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e84ec590-0593-5433-8536-9c5125166743'}})  2025-09-08 00:39:54.819315 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.819326 | orchestrator | 2025-09-08 00:39:54.819336 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-08 00:39:54.819346 | orchestrator | Monday 08 September 2025 00:39:50 +0000 (0:00:00.168) 0:00:25.532 ****** 2025-09-08 00:39:54.819357 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:39:54.819368 | orchestrator | 2025-09-08 00:39:54.819378 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-08 00:39:54.819388 | orchestrator | Monday 08 September 2025 00:39:50 +0000 (0:00:00.182) 0:00:25.715 ****** 2025-09-08 00:39:54.819399 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:39:54.819409 | orchestrator | 2025-09-08 00:39:54.819420 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-08 00:39:54.819431 | orchestrator | Monday 08 September 2025 00:39:50 +0000 (0:00:00.168) 0:00:25.884 ****** 2025-09-08 00:39:54.819441 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.819451 | orchestrator | 2025-09-08 00:39:54.819478 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-08 00:39:54.819489 | orchestrator | Monday 08 September 2025 00:39:50 +0000 (0:00:00.136) 0:00:26.020 ****** 2025-09-08 00:39:54.819497 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:39:54.819506 | orchestrator | 2025-09-08 00:39:54.819515 | orchestrator | TASK 
[Set DB+WAL devices config data] ******************************************
2025-09-08 00:39:54.819523 | orchestrator | Monday 08 September 2025 00:39:50 +0000 (0:00:00.373) 0:00:26.394 ******
2025-09-08 00:39:54.819532 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:54.819540 | orchestrator |
2025-09-08 00:39:54.819549 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-08 00:39:54.819557 | orchestrator | Monday 08 September 2025 00:39:51 +0000 (0:00:00.146) 0:00:26.541 ******
2025-09-08 00:39:54.819566 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:39:54.819575 | orchestrator |     "ceph_osd_devices": {
2025-09-08 00:39:54.819583 | orchestrator |         "sdb": {
2025-09-08 00:39:54.819592 | orchestrator |             "osd_lvm_uuid": "39881e3d-2712-5fd1-9b8f-3e1ed3474a2a"
2025-09-08 00:39:54.819601 | orchestrator |         },
2025-09-08 00:39:54.819609 | orchestrator |         "sdc": {
2025-09-08 00:39:54.819618 | orchestrator |             "osd_lvm_uuid": "e84ec590-0593-5433-8536-9c5125166743"
2025-09-08 00:39:54.819633 | orchestrator |         }
2025-09-08 00:39:54.819642 | orchestrator |     }
2025-09-08 00:39:54.819651 | orchestrator | }
2025-09-08 00:39:54.819660 | orchestrator |
2025-09-08 00:39:54.819668 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-08 00:39:54.819677 | orchestrator | Monday 08 September 2025 00:39:51 +0000 (0:00:00.143) 0:00:26.684 ******
2025-09-08 00:39:54.819685 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:54.819694 | orchestrator |
2025-09-08 00:39:54.819707 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-08 00:39:54.819716 | orchestrator | Monday 08 September 2025 00:39:51 +0000 (0:00:00.124) 0:00:26.808 ******
2025-09-08 00:39:54.819725 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:54.819733 | orchestrator |
2025-09-08 00:39:54.819742 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-08 00:39:54.819750 | orchestrator | Monday 08 September 2025 00:39:51 +0000 (0:00:00.153) 0:00:26.962 ******
2025-09-08 00:39:54.819759 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:39:54.819768 | orchestrator |
2025-09-08 00:39:54.819776 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-08 00:39:54.819785 | orchestrator | Monday 08 September 2025 00:39:51 +0000 (0:00:00.126) 0:00:27.089 ******
2025-09-08 00:39:54.819793 | orchestrator | changed: [testbed-node-4] => {
2025-09-08 00:39:54.819802 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-08 00:39:54.819810 | orchestrator |         "ceph_osd_devices": {
2025-09-08 00:39:54.819819 | orchestrator |             "sdb": {
2025-09-08 00:39:54.819845 | orchestrator |                 "osd_lvm_uuid": "39881e3d-2712-5fd1-9b8f-3e1ed3474a2a"
2025-09-08 00:39:54.819854 | orchestrator |             },
2025-09-08 00:39:54.819867 | orchestrator |             "sdc": {
2025-09-08 00:39:54.819876 | orchestrator |                 "osd_lvm_uuid": "e84ec590-0593-5433-8536-9c5125166743"
2025-09-08 00:39:54.819885 | orchestrator |             }
2025-09-08 00:39:54.819893 | orchestrator |         },
2025-09-08 00:39:54.819902 | orchestrator |         "lvm_volumes": [
2025-09-08 00:39:54.819911 | orchestrator |             {
2025-09-08 00:39:54.819919 | orchestrator |                 "data": "osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a",
2025-09-08 00:39:54.819928 | orchestrator |                 "data_vg": "ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a"
2025-09-08 00:39:54.819937 | orchestrator |             },
2025-09-08 00:39:54.819945 | orchestrator |             {
2025-09-08 00:39:54.819954 | orchestrator |                 "data": "osd-block-e84ec590-0593-5433-8536-9c5125166743",
2025-09-08 00:39:54.819962 | orchestrator |                 "data_vg": "ceph-e84ec590-0593-5433-8536-9c5125166743"
2025-09-08 00:39:54.819971 | orchestrator |             }
2025-09-08 00:39:54.819980 | orchestrator |         ]
2025-09-08 00:39:54.819988 | orchestrator |     }
2025-09-08 00:39:54.819997 |
orchestrator | } 2025-09-08 00:39:54.820005 | orchestrator | 2025-09-08 00:39:54.820014 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-08 00:39:54.820022 | orchestrator | Monday 08 September 2025 00:39:51 +0000 (0:00:00.200) 0:00:27.290 ****** 2025-09-08 00:39:54.820031 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-08 00:39:54.820039 | orchestrator | 2025-09-08 00:39:54.820048 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-08 00:39:54.820057 | orchestrator | 2025-09-08 00:39:54.820065 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-08 00:39:54.820074 | orchestrator | Monday 08 September 2025 00:39:53 +0000 (0:00:01.236) 0:00:28.526 ****** 2025-09-08 00:39:54.820082 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-08 00:39:54.820091 | orchestrator | 2025-09-08 00:39:54.820100 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-08 00:39:54.820108 | orchestrator | Monday 08 September 2025 00:39:53 +0000 (0:00:00.485) 0:00:29.012 ****** 2025-09-08 00:39:54.820117 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:39:54.820132 | orchestrator | 2025-09-08 00:39:54.820141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:39:54.820149 | orchestrator | Monday 08 September 2025 00:39:54 +0000 (0:00:00.692) 0:00:29.704 ****** 2025-09-08 00:39:54.820158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-08 00:39:54.820166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-08 00:39:54.820175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-08 
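The "Print configuration data" output for testbed-node-4 shows that each lvm_volumes entry is derived mechanically from the device's osd_lvm_uuid: the logical volume is named osd-block-&lt;uuid&gt; and its volume group ceph-&lt;uuid&gt;. A minimal sketch of that mapping (a hypothetical helper, not the playbook's actual task code):

```python
# Derive the lvm_volumes list from ceph_osd_devices, following the naming
# visible in the log: data = "osd-block-<osd_lvm_uuid>",
# data_vg = "ceph-<osd_lvm_uuid>". Illustrative only; the play builds this
# via set_fact tasks ("Generate lvm_volumes structure (block only)" etc.).
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    volumes = []
    for device, meta in ceph_osd_devices.items():
        osd_uuid = meta["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{osd_uuid}",
            "data_vg": f"ceph-{osd_uuid}",
        })
    return volumes

devices = {
    "sdb": {"osd_lvm_uuid": "39881e3d-2712-5fd1-9b8f-3e1ed3474a2a"},
    "sdc": {"osd_lvm_uuid": "e84ec590-0593-5433-8536-9c5125166743"},
}
```

Applied to the two devices above, this reproduces exactly the lvm_volumes block that the handler then writes to the configuration file on testbed-manager. The skipped "block + db", "block + wal", and "block + db + wal" variants would add db/db_vg and wal/wal_vg keys when dedicated DB or WAL devices are configured.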
00:39:54.820183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-08 00:39:54.820192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-08 00:39:54.820200 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-08 00:39:54.820215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-08 00:40:03.809795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-08 00:40:03.809978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-08 00:40:03.809996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-08 00:40:03.810008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-08 00:40:03.810090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-08 00:40:03.810103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-08 00:40:03.810115 | orchestrator | 2025-09-08 00:40:03.810127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810140 | orchestrator | Monday 08 September 2025 00:39:54 +0000 (0:00:00.564) 0:00:30.269 ****** 2025-09-08 00:40:03.810152 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.810164 | orchestrator | 2025-09-08 00:40:03.810176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810187 | orchestrator | Monday 08 September 2025 00:39:55 +0000 (0:00:00.211) 0:00:30.480 ****** 2025-09-08 00:40:03.810198 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.810209 | orchestrator | 
2025-09-08 00:40:03.810220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810231 | orchestrator | Monday 08 September 2025 00:39:55 +0000 (0:00:00.207) 0:00:30.687 ****** 2025-09-08 00:40:03.810242 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.810253 | orchestrator | 2025-09-08 00:40:03.810264 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810275 | orchestrator | Monday 08 September 2025 00:39:55 +0000 (0:00:00.212) 0:00:30.900 ****** 2025-09-08 00:40:03.810286 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.810297 | orchestrator | 2025-09-08 00:40:03.810307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810318 | orchestrator | Monday 08 September 2025 00:39:55 +0000 (0:00:00.198) 0:00:31.098 ****** 2025-09-08 00:40:03.810329 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.810340 | orchestrator | 2025-09-08 00:40:03.810351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810362 | orchestrator | Monday 08 September 2025 00:39:55 +0000 (0:00:00.237) 0:00:31.336 ****** 2025-09-08 00:40:03.810373 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.810384 | orchestrator | 2025-09-08 00:40:03.810395 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810406 | orchestrator | Monday 08 September 2025 00:39:56 +0000 (0:00:00.194) 0:00:31.531 ****** 2025-09-08 00:40:03.810417 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.810428 | orchestrator | 2025-09-08 00:40:03.810469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810481 | orchestrator | Monday 08 September 2025 00:39:56 +0000 
(0:00:00.229) 0:00:31.760 ****** 2025-09-08 00:40:03.810492 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.810502 | orchestrator | 2025-09-08 00:40:03.810535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810547 | orchestrator | Monday 08 September 2025 00:39:56 +0000 (0:00:00.262) 0:00:32.023 ****** 2025-09-08 00:40:03.810558 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44) 2025-09-08 00:40:03.810571 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44) 2025-09-08 00:40:03.810582 | orchestrator | 2025-09-08 00:40:03.810593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810604 | orchestrator | Monday 08 September 2025 00:39:57 +0000 (0:00:00.630) 0:00:32.653 ****** 2025-09-08 00:40:03.810614 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e) 2025-09-08 00:40:03.810625 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e) 2025-09-08 00:40:03.810636 | orchestrator | 2025-09-08 00:40:03.810647 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810658 | orchestrator | Monday 08 September 2025 00:39:58 +0000 (0:00:00.873) 0:00:33.527 ****** 2025-09-08 00:40:03.810668 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d) 2025-09-08 00:40:03.810679 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d) 2025-09-08 00:40:03.810690 | orchestrator | 2025-09-08 00:40:03.810701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810712 | orchestrator | 
Monday 08 September 2025 00:39:58 +0000 (0:00:00.447) 0:00:33.975 ****** 2025-09-08 00:40:03.810722 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962) 2025-09-08 00:40:03.810733 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962) 2025-09-08 00:40:03.810744 | orchestrator | 2025-09-08 00:40:03.810755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:40:03.810766 | orchestrator | Monday 08 September 2025 00:39:59 +0000 (0:00:00.503) 0:00:34.478 ****** 2025-09-08 00:40:03.810776 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-08 00:40:03.810787 | orchestrator | 2025-09-08 00:40:03.810798 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:40:03.810809 | orchestrator | Monday 08 September 2025 00:39:59 +0000 (0:00:00.471) 0:00:34.949 ****** 2025-09-08 00:40:03.810857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-08 00:40:03.810870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-08 00:40:03.810880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-08 00:40:03.810891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-08 00:40:03.810902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-08 00:40:03.810913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-08 00:40:03.810924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-08 00:40:03.810934 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-08 00:40:03.810945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-08 00:40:03.810966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-08 00:40:03.810977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-08 00:40:03.810988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-08 00:40:03.810998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-08 00:40:03.811009 | orchestrator | 2025-09-08 00:40:03.811020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:40:03.811031 | orchestrator | Monday 08 September 2025 00:39:59 +0000 (0:00:00.483) 0:00:35.433 ****** 2025-09-08 00:40:03.811042 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.811053 | orchestrator | 2025-09-08 00:40:03.811064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:40:03.811075 | orchestrator | Monday 08 September 2025 00:40:00 +0000 (0:00:00.250) 0:00:35.683 ****** 2025-09-08 00:40:03.811086 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.811096 | orchestrator | 2025-09-08 00:40:03.811107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:40:03.811118 | orchestrator | Monday 08 September 2025 00:40:00 +0000 (0:00:00.227) 0:00:35.910 ****** 2025-09-08 00:40:03.811129 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:40:03.811140 | orchestrator | 2025-09-08 00:40:03.811151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:40:03.811161 | 
orchestrator | Monday 08 September 2025 00:40:00 +0000 (0:00:00.213) 0:00:36.124 ******
2025-09-08 00:40:03.811172 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:03.811183 | orchestrator |
2025-09-08 00:40:03.811194 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:03.811205 | orchestrator | Monday 08 September 2025 00:40:00 +0000 (0:00:00.218) 0:00:36.343 ******
2025-09-08 00:40:03.811216 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:03.811226 | orchestrator |
2025-09-08 00:40:03.811237 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:03.811248 | orchestrator | Monday 08 September 2025 00:40:01 +0000 (0:00:00.217) 0:00:36.561 ******
2025-09-08 00:40:03.811259 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:03.811270 | orchestrator |
2025-09-08 00:40:03.811281 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:03.811291 | orchestrator | Monday 08 September 2025 00:40:01 +0000 (0:00:00.680) 0:00:37.241 ******
2025-09-08 00:40:03.811302 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:03.811313 | orchestrator |
2025-09-08 00:40:03.811324 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:03.811335 | orchestrator | Monday 08 September 2025 00:40:01 +0000 (0:00:00.197) 0:00:37.439 ******
2025-09-08 00:40:03.811346 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:03.811357 | orchestrator |
2025-09-08 00:40:03.811368 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:03.811378 | orchestrator | Monday 08 September 2025 00:40:02 +0000 (0:00:00.199) 0:00:37.638 ******
2025-09-08 00:40:03.811389 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-09-08 00:40:03.811400 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-09-08 00:40:03.811412 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-09-08 00:40:03.811422 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-09-08 00:40:03.811433 | orchestrator |
2025-09-08 00:40:03.811444 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:03.811455 | orchestrator | Monday 08 September 2025 00:40:02 +0000 (0:00:00.646) 0:00:38.285 ******
2025-09-08 00:40:03.811466 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:03.811477 | orchestrator |
2025-09-08 00:40:03.811487 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:03.811498 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.216) 0:00:38.502 ******
2025-09-08 00:40:03.811516 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:03.811527 | orchestrator |
2025-09-08 00:40:03.811538 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:03.811549 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.228) 0:00:38.730 ******
2025-09-08 00:40:03.811560 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:03.811571 | orchestrator |
2025-09-08 00:40:03.811581 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:40:03.811592 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.289) 0:00:39.020 ******
2025-09-08 00:40:03.811609 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:03.811620 | orchestrator |
2025-09-08 00:40:03.811631 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-08 00:40:03.811648 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.243) 0:00:39.263 ******
2025-09-08 00:40:07.705093 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-09-08 00:40:07.705221 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-09-08 00:40:07.705234 | orchestrator |
2025-09-08 00:40:07.705246 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-08 00:40:07.705257 | orchestrator | Monday 08 September 2025 00:40:03 +0000 (0:00:00.188) 0:00:39.452 ******
2025-09-08 00:40:07.705267 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.705277 | orchestrator |
2025-09-08 00:40:07.705287 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-08 00:40:07.705297 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.136) 0:00:39.588 ******
2025-09-08 00:40:07.705307 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.705317 | orchestrator |
2025-09-08 00:40:07.705326 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-08 00:40:07.705336 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.106) 0:00:39.695 ******
2025-09-08 00:40:07.705346 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.705356 | orchestrator |
2025-09-08 00:40:07.705366 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-08 00:40:07.705375 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.119) 0:00:39.814 ******
2025-09-08 00:40:07.705385 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:40:07.705396 | orchestrator |
2025-09-08 00:40:07.705406 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-08 00:40:07.705415 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.255) 0:00:40.070 ******
2025-09-08 00:40:07.705426 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'}})
2025-09-08 00:40:07.705437 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'}})
2025-09-08 00:40:07.705447 | orchestrator |
2025-09-08 00:40:07.705457 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-08 00:40:07.705466 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.166) 0:00:40.236 ******
2025-09-08 00:40:07.705477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'}})
2025-09-08 00:40:07.705488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'}})
2025-09-08 00:40:07.705498 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.705508 | orchestrator |
2025-09-08 00:40:07.705539 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-08 00:40:07.705550 | orchestrator | Monday 08 September 2025 00:40:04 +0000 (0:00:00.134) 0:00:40.370 ******
2025-09-08 00:40:07.705559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'}})
2025-09-08 00:40:07.705569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'}})
2025-09-08 00:40:07.705605 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.705617 | orchestrator |
2025-09-08 00:40:07.705630 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-08 00:40:07.705641 | orchestrator | Monday 08 September 2025 00:40:05 +0000 (0:00:00.132) 0:00:40.503 ******
2025-09-08 00:40:07.705654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'}})
2025-09-08 00:40:07.705665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'}})
2025-09-08 00:40:07.705678 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.705690 | orchestrator |
2025-09-08 00:40:07.705702 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-08 00:40:07.705714 | orchestrator | Monday 08 September 2025 00:40:05 +0000 (0:00:00.132) 0:00:40.635 ******
2025-09-08 00:40:07.705726 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:40:07.705737 | orchestrator |
2025-09-08 00:40:07.705748 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-08 00:40:07.705760 | orchestrator | Monday 08 September 2025 00:40:05 +0000 (0:00:00.123) 0:00:40.759 ******
2025-09-08 00:40:07.705772 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:40:07.705783 | orchestrator |
2025-09-08 00:40:07.705795 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-08 00:40:07.705828 | orchestrator | Monday 08 September 2025 00:40:05 +0000 (0:00:00.116) 0:00:40.875 ******
2025-09-08 00:40:07.705840 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.705852 | orchestrator |
2025-09-08 00:40:07.705863 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-08 00:40:07.705875 | orchestrator | Monday 08 September 2025 00:40:05 +0000 (0:00:00.130) 0:00:41.006 ******
2025-09-08 00:40:07.705887 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.705899 | orchestrator |
2025-09-08 00:40:07.705910 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-08 00:40:07.705921 | orchestrator | Monday 08 September 2025 00:40:05 +0000 (0:00:00.112) 0:00:41.118 ******
2025-09-08 00:40:07.705933 | orchestrator | skipping: [testbed-node-5]
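A note on the "Set UUIDs for OSD VGs/LVs" step above: each OSD device receives a stable `osd_lvm_uuid` that is then reused in VG/LV names, so re-running the play must reproduce the same value. A minimal sketch of how such deterministic IDs can be produced, assuming a name-based (version 5) UUID keyed on host and device; this is an illustrative assumption, not necessarily the playbook's exact derivation:

```python
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> str:
    # Hypothetical derivation: a name-based (UUIDv5) identifier computed
    # from host + device is identical on every run, so repeated plays
    # never rename existing volume groups or logical volumes.
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

# Same inputs always yield the same UUID; different devices differ:
assert osd_lvm_uuid("testbed-node-5", "sdb") == osd_lvm_uuid("testbed-node-5", "sdb")
assert osd_lvm_uuid("testbed-node-5", "sdb") != osd_lvm_uuid("testbed-node-5", "sdc")
```

The UUIDs in the log (e.g. `8709f3ee-6295-5c1a-...`) carry a `5` in the version nibble, which is consistent with a name-based scheme like this, though the exact namespace and name used are not visible in the output.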
2025-09-08 00:40:07.705945 | orchestrator |
2025-09-08 00:40:07.705957 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-08 00:40:07.705967 | orchestrator | Monday 08 September 2025 00:40:05 +0000 (0:00:00.127) 0:00:41.233 ******
2025-09-08 00:40:07.705976 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:40:07.705986 | orchestrator |     "ceph_osd_devices": {
2025-09-08 00:40:07.705996 | orchestrator |         "sdb": {
2025-09-08 00:40:07.706006 | orchestrator |             "osd_lvm_uuid": "8709f3ee-6295-5c1a-8e33-a410dc9aa8e2"
2025-09-08 00:40:07.706095 | orchestrator |         },
2025-09-08 00:40:07.706118 | orchestrator |         "sdc": {
2025-09-08 00:40:07.706135 | orchestrator |             "osd_lvm_uuid": "2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf"
2025-09-08 00:40:07.706151 | orchestrator |         }
2025-09-08 00:40:07.706168 | orchestrator |     }
2025-09-08 00:40:07.706183 | orchestrator | }
2025-09-08 00:40:07.706199 | orchestrator |
2025-09-08 00:40:07.706215 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-08 00:40:07.706233 | orchestrator | Monday 08 September 2025 00:40:05 +0000 (0:00:00.175) 0:00:41.361 ******
2025-09-08 00:40:07.706252 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.706269 | orchestrator |
2025-09-08 00:40:07.706287 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-08 00:40:07.706304 | orchestrator | Monday 08 September 2025 00:40:06 +0000 (0:00:00.430) 0:00:41.536 ******
2025-09-08 00:40:07.706322 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.706339 | orchestrator |
2025-09-08 00:40:07.706354 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-08 00:40:07.706485 | orchestrator | Monday 08 September 2025 00:40:06 +0000 (0:00:00.112) 0:00:41.967 ******
2025-09-08 00:40:07.706509 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:40:07.706529 | orchestrator |
2025-09-08 00:40:07.706550 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-08 00:40:07.706571 | orchestrator | Monday 08 September 2025 00:40:06 +0000 (0:00:00.112) 0:00:42.079 ******
2025-09-08 00:40:07.706592 | orchestrator | changed: [testbed-node-5] => {
2025-09-08 00:40:07.706611 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-08 00:40:07.706631 | orchestrator |         "ceph_osd_devices": {
2025-09-08 00:40:07.706651 | orchestrator |             "sdb": {
2025-09-08 00:40:07.706671 | orchestrator |                 "osd_lvm_uuid": "8709f3ee-6295-5c1a-8e33-a410dc9aa8e2"
2025-09-08 00:40:07.706692 | orchestrator |             },
2025-09-08 00:40:07.706712 | orchestrator |             "sdc": {
2025-09-08 00:40:07.706732 | orchestrator |                 "osd_lvm_uuid": "2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf"
2025-09-08 00:40:07.706751 | orchestrator |             }
2025-09-08 00:40:07.706769 | orchestrator |         },
2025-09-08 00:40:07.706788 | orchestrator |         "lvm_volumes": [
2025-09-08 00:40:07.706831 | orchestrator |             {
2025-09-08 00:40:07.706852 | orchestrator |                 "data": "osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2",
2025-09-08 00:40:07.706872 | orchestrator |                 "data_vg": "ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2"
2025-09-08 00:40:07.706890 | orchestrator |             },
2025-09-08 00:40:07.706909 | orchestrator |             {
2025-09-08 00:40:07.706925 | orchestrator |                 "data": "osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf",
2025-09-08 00:40:07.706944 | orchestrator |                 "data_vg": "ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf"
2025-09-08 00:40:07.706962 | orchestrator |             }
2025-09-08 00:40:07.706981 | orchestrator |         ]
2025-09-08 00:40:07.707001 | orchestrator |     }
2025-09-08 00:40:07.707019 | orchestrator | }
2025-09-08 00:40:07.707043 | orchestrator |
2025-09-08 00:40:07.707062 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-08 00:40:07.707080 | orchestrator | Monday 08 September 2025 00:40:06 +0000 (0:00:00.176) 0:00:42.255 ******
2025-09-08 00:40:07.707099 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-08 00:40:07.707118 | orchestrator |
2025-09-08 00:40:07.707137 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:40:07.707173 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-08 00:40:07.707195 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-08 00:40:07.707215 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-08 00:40:07.707234 | orchestrator |
2025-09-08 00:40:07.707253 | orchestrator |
2025-09-08 00:40:07.707272 | orchestrator |
2025-09-08 00:40:07.707291 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:40:07.707309 | orchestrator | Monday 08 September 2025 00:40:07 +0000 (0:00:00.883) 0:00:43.139 ******
2025-09-08 00:40:07.707326 | orchestrator | ===============================================================================
2025-09-08 00:40:07.707343 | orchestrator | Write configuration file ------------------------------------------------ 4.34s
2025-09-08 00:40:07.707362 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s
2025-09-08 00:40:07.707380 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s
2025-09-08 00:40:07.707398 | orchestrator | Get initial list of available block devices ----------------------------- 1.14s
2025-09-08 00:40:07.707415 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s
2025-09-08 00:40:07.707433 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.97s
2025-09-08 00:40:07.707466 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2025-09-08 00:40:07.707485 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2025-09-08 00:40:07.707504 | orchestrator | Print DB devices -------------------------------------------------------- 0.74s
2025-09-08 00:40:07.707522 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.72s
2025-09-08 00:40:07.707540 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-09-08 00:40:07.707558 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-09-08 00:40:07.707577 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-09-08 00:40:07.707596 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.66s
2025-09-08 00:40:07.707630 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-09-08 00:40:07.933422 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-09-08 00:40:07.933506 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-09-08 00:40:07.933516 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-09-08 00:40:07.933524 | orchestrator | Set WAL devices config data --------------------------------------------- 0.63s
2025-09-08 00:40:07.933531 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-09-08 00:40:30.044472 | orchestrator | 2025-09-08 00:40:30 | INFO  | Task d4c88f1b-321b-43aa-a517-6d40d1eef72f (sync inventory) is running in background. Output coming soon.
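The "Print configuration data" output above shows how the play turns `ceph_osd_devices` into the `lvm_volumes` list consumed downstream: for the block-only layout, each device's `osd_lvm_uuid` is embedded in both the LV name (`osd-block-<uuid>`) and the VG name (`ceph-<uuid>`). A minimal sketch of that mapping, reconstructed from the log output rather than from the playbook source:

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    # Mirrors the naming visible in the log for the block-only case
    # (no separate DB/WAL devices): one entry per OSD device, with the
    # device's UUID reused in both the LV and VG names.
    return [
        {
            "data": f"osd-block-{v['osd_lvm_uuid']}",
            "data_vg": f"ceph-{v['osd_lvm_uuid']}",
        }
        for v in ceph_osd_devices.values()
    ]

devices = {
    "sdb": {"osd_lvm_uuid": "8709f3ee-6295-5c1a-8e33-a410dc9aa8e2"},
    "sdc": {"osd_lvm_uuid": "2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf"},
}
print(build_lvm_volumes(devices))
```

Run against the two devices from the log, this reproduces exactly the `lvm_volumes` entries that the handler then persists via "Write configuration file".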
2025-09-08 00:40:50.476963 | orchestrator | 2025-09-08 00:40:31 | INFO  | Starting group_vars file reorganization
2025-09-08 00:40:50.477091 | orchestrator | 2025-09-08 00:40:31 | INFO  | Moved 0 file(s) to their respective directories
2025-09-08 00:40:50.477104 | orchestrator | 2025-09-08 00:40:31 | INFO  | Group_vars file reorganization completed
2025-09-08 00:40:50.477112 | orchestrator | 2025-09-08 00:40:33 | INFO  | Starting variable preparation from inventory
2025-09-08 00:40:50.477121 | orchestrator | 2025-09-08 00:40:34 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-08 00:40:50.477130 | orchestrator | 2025-09-08 00:40:34 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-08 00:40:50.477138 | orchestrator | 2025-09-08 00:40:35 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-08 00:40:50.477146 | orchestrator | 2025-09-08 00:40:35 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-08 00:40:50.477154 | orchestrator | 2025-09-08 00:40:35 | INFO  | Variable preparation completed
2025-09-08 00:40:50.477162 | orchestrator | 2025-09-08 00:40:36 | INFO  | Starting inventory overwrite handling
2025-09-08 00:40:50.477170 | orchestrator | 2025-09-08 00:40:36 | INFO  | Handling group overwrites in 99-overwrite
2025-09-08 00:40:50.477179 | orchestrator | 2025-09-08 00:40:36 | INFO  | Removing group frr:children from 60-generic
2025-09-08 00:40:50.477187 | orchestrator | 2025-09-08 00:40:36 | INFO  | Removing group storage:children from 50-kolla
2025-09-08 00:40:50.477195 | orchestrator | 2025-09-08 00:40:36 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-08 00:40:50.477203 | orchestrator | 2025-09-08 00:40:36 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-08 00:40:50.477212 | orchestrator | 2025-09-08 00:40:36 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-08 00:40:50.477220 | orchestrator | 2025-09-08 00:40:36 | INFO  | Handling group overwrites in 20-roles
2025-09-08 00:40:50.477228 | orchestrator | 2025-09-08 00:40:36 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-08 00:40:50.477260 | orchestrator | 2025-09-08 00:40:36 | INFO  | Removed 6 group(s) in total
2025-09-08 00:40:50.477269 | orchestrator | 2025-09-08 00:40:36 | INFO  | Inventory overwrite handling completed
2025-09-08 00:40:50.477277 | orchestrator | 2025-09-08 00:40:37 | INFO  | Starting merge of inventory files
2025-09-08 00:40:50.477284 | orchestrator | 2025-09-08 00:40:37 | INFO  | Inventory files merged successfully
2025-09-08 00:40:50.477292 | orchestrator | 2025-09-08 00:40:41 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-08 00:40:50.477300 | orchestrator | 2025-09-08 00:40:49 | INFO  | Successfully wrote ClusterShell configuration
2025-09-08 00:40:50.477309 | orchestrator | [master 7867e2a] 2025-09-08-00-40
2025-09-08 00:40:50.477318 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-08 00:40:52.333423 | orchestrator | 2025-09-08 00:40:52 | INFO  | Task 1319741d-2f3f-42c8-8c64-9f82a14167b6 (ceph-create-lvm-devices) was prepared for execution.
2025-09-08 00:40:52.333538 | orchestrator | 2025-09-08 00:40:52 | INFO  | It takes a moment until task 1319741d-2f3f-42c8-8c64-9f82a14167b6 (ceph-create-lvm-devices) has been started and output is visible here.
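The "inventory overwrite handling" messages above describe a layered-inventory merge: a group defined in a higher-priority layer (e.g. `99-overwrite`) is removed from lower-priority layers (e.g. `60-generic`, `50-kolla`) so that the higher layer's definition wins when the files are merged. A minimal sketch of that pass, with hypothetical layer/group structures inferred from the log, not taken from the actual tool:

```python
def remove_overwritten_groups(layers, order):
    """Drop groups from lower-priority layers when a higher layer redefines them.

    layers: {layer_name: set of group names}
    order:  layer names from lowest to highest priority
    Returns a list of (group, layer) pairs that were removed.
    """
    removed = []
    for i, high in enumerate(order):
        for low in order[:i]:
            # Any group present in both layers is removed from the lower one.
            for group in sorted(layers[high] & layers[low]):
                layers[low].discard(group)
                removed.append((group, low))
    return removed

layers = {
    "60-generic": {"frr:children", "compute"},
    "99-overwrite": {"frr:children"},
}
print(remove_overwritten_groups(layers, ["60-generic", "99-overwrite"]))
# → [('frr:children', '60-generic')]
```

This matches entries like "Removing group frr:children from 60-generic" in the log; the real implementation additionally rewrites the inventory files on disk before the merge step.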
2025-09-08 00:41:03.482137 | orchestrator |
2025-09-08 00:41:03.482276 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-08 00:41:03.482293 | orchestrator |
2025-09-08 00:41:03.482305 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-08 00:41:03.482317 | orchestrator | Monday 08 September 2025 00:40:56 +0000 (0:00:00.231) 0:00:00.231 ******
2025-09-08 00:41:03.482329 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-08 00:41:03.482341 | orchestrator |
2025-09-08 00:41:03.482352 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-08 00:41:03.482363 | orchestrator | Monday 08 September 2025 00:40:56 +0000 (0:00:00.231) 0:00:00.463 ******
2025-09-08 00:41:03.482374 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:41:03.482386 | orchestrator |
2025-09-08 00:41:03.482397 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.482408 | orchestrator | Monday 08 September 2025 00:40:56 +0000 (0:00:00.193) 0:00:00.657 ******
2025-09-08 00:41:03.482419 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-08 00:41:03.482431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-08 00:41:03.482443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-08 00:41:03.482454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-08 00:41:03.482465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-08 00:41:03.482476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-08 00:41:03.482487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-08 00:41:03.482497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-08 00:41:03.482508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-08 00:41:03.482519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-08 00:41:03.482530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-08 00:41:03.482540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-08 00:41:03.482551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-08 00:41:03.482562 | orchestrator |
2025-09-08 00:41:03.482573 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.482612 | orchestrator | Monday 08 September 2025 00:40:56 +0000 (0:00:00.393) 0:00:01.050 ******
2025-09-08 00:41:03.482626 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.482639 | orchestrator |
2025-09-08 00:41:03.482652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.482685 | orchestrator | Monday 08 September 2025 00:40:57 +0000 (0:00:00.394) 0:00:01.444 ******
2025-09-08 00:41:03.482700 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.482713 | orchestrator |
2025-09-08 00:41:03.482726 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.482739 | orchestrator | Monday 08 September 2025 00:40:57 +0000 (0:00:00.191) 0:00:01.636 ******
2025-09-08 00:41:03.482776 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.482788 | orchestrator |
2025-09-08 00:41:03.482807 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.482820 | orchestrator | Monday 08 September 2025 00:40:57 +0000 (0:00:00.198) 0:00:01.834 ******
2025-09-08 00:41:03.482833 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.482846 | orchestrator |
2025-09-08 00:41:03.482858 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.482871 | orchestrator | Monday 08 September 2025 00:40:57 +0000 (0:00:00.206) 0:00:02.040 ******
2025-09-08 00:41:03.482884 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.482896 | orchestrator |
2025-09-08 00:41:03.482909 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.482922 | orchestrator | Monday 08 September 2025 00:40:58 +0000 (0:00:00.200) 0:00:02.241 ******
2025-09-08 00:41:03.482935 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.482947 | orchestrator |
2025-09-08 00:41:03.482958 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.482969 | orchestrator | Monday 08 September 2025 00:40:58 +0000 (0:00:00.202) 0:00:02.443 ******
2025-09-08 00:41:03.482979 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.482990 | orchestrator |
2025-09-08 00:41:03.483001 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.483011 | orchestrator | Monday 08 September 2025 00:40:58 +0000 (0:00:00.197) 0:00:02.641 ******
2025-09-08 00:41:03.483022 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.483033 | orchestrator |
2025-09-08 00:41:03.483044 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.483055 | orchestrator | Monday 08 September 2025 00:40:58 +0000 (0:00:00.184) 0:00:02.826 ******
2025-09-08 00:41:03.483065 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e)
2025-09-08 00:41:03.483078 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e)
2025-09-08 00:41:03.483089 | orchestrator |
2025-09-08 00:41:03.483099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.483110 | orchestrator | Monday 08 September 2025 00:40:59 +0000 (0:00:00.488) 0:00:03.314 ******
2025-09-08 00:41:03.483138 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb)
2025-09-08 00:41:03.483151 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb)
2025-09-08 00:41:03.483162 | orchestrator |
2025-09-08 00:41:03.483172 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.483183 | orchestrator | Monday 08 September 2025 00:40:59 +0000 (0:00:00.404) 0:00:03.719 ******
2025-09-08 00:41:03.483194 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5)
2025-09-08 00:41:03.483205 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5)
2025-09-08 00:41:03.483216 | orchestrator |
2025-09-08 00:41:03.483227 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.483248 | orchestrator | Monday 08 September 2025 00:41:00 +0000 (0:00:00.628) 0:00:04.347 ******
2025-09-08 00:41:03.483259 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3)
2025-09-08 00:41:03.483270 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3)
2025-09-08 00:41:03.483281 | orchestrator |
2025-09-08 00:41:03.483291 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:03.483302 | orchestrator | Monday 08 September 2025 00:41:00 +0000 (0:00:00.708) 0:00:05.055 ******
2025-09-08 00:41:03.483313 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-08 00:41:03.483324 | orchestrator |
2025-09-08 00:41:03.483335 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:03.483346 | orchestrator | Monday 08 September 2025 00:41:01 +0000 (0:00:00.665) 0:00:05.721 ******
2025-09-08 00:41:03.483356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-08 00:41:03.483367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-08 00:41:03.483378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-08 00:41:03.483388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-08 00:41:03.483399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-08 00:41:03.483410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-08 00:41:03.483420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-08 00:41:03.483431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-08 00:41:03.483442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-08 00:41:03.483452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-08 00:41:03.483463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-08 00:41:03.483473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-08 00:41:03.483484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-08 00:41:03.483495 | orchestrator |
2025-09-08 00:41:03.483506 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:03.483517 | orchestrator | Monday 08 September 2025 00:41:01 +0000 (0:00:00.398) 0:00:06.120 ******
2025-09-08 00:41:03.483527 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.483538 | orchestrator |
2025-09-08 00:41:03.483549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:03.483560 | orchestrator | Monday 08 September 2025 00:41:02 +0000 (0:00:00.190) 0:00:06.310 ******
2025-09-08 00:41:03.483570 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.483581 | orchestrator |
2025-09-08 00:41:03.483592 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:03.483603 | orchestrator | Monday 08 September 2025 00:41:02 +0000 (0:00:00.256) 0:00:06.567 ******
2025-09-08 00:41:03.483613 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.483624 | orchestrator |
2025-09-08 00:41:03.483635 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:03.483646 | orchestrator | Monday 08 September 2025 00:41:02 +0000 (0:00:00.171) 0:00:06.738 ******
2025-09-08 00:41:03.483656 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.483667 | orchestrator |
2025-09-08 00:41:03.483678 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:03.483689 | orchestrator | Monday 08 September 2025 00:41:02 +0000 (0:00:00.195) 0:00:06.934 ******
2025-09-08 00:41:03.483707 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.483718 | orchestrator |
2025-09-08 00:41:03.483728 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:03.483739 | orchestrator | Monday 08 September 2025 00:41:02 +0000 (0:00:00.179) 0:00:07.114 ******
2025-09-08 00:41:03.483768 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.483780 | orchestrator |
2025-09-08 00:41:03.483791 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:03.483802 | orchestrator | Monday 08 September 2025 00:41:03 +0000 (0:00:00.210) 0:00:07.324 ******
2025-09-08 00:41:03.483812 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:03.483823 | orchestrator |
2025-09-08 00:41:03.483834 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:03.483845 | orchestrator | Monday 08 September 2025 00:41:03 +0000 (0:00:00.187) 0:00:07.512 ******
2025-09-08 00:41:03.483862 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:11.588926 | orchestrator |
2025-09-08 00:41:11.589067 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:11.589085 | orchestrator | Monday 08 September 2025 00:41:03 +0000 (0:00:00.183) 0:00:07.695 ******
2025-09-08 00:41:11.589097 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-08 00:41:11.589110 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-08 00:41:11.589122 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-08 00:41:11.589134 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-08 00:41:11.589145 | orchestrator |
2025-09-08 00:41:11.589156 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:11.589168 | orchestrator | Monday 08 September 2025 00:41:04 +0000 (0:00:00.908) 0:00:08.604 ******
2025-09-08 00:41:11.589179 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:11.589190 | orchestrator |
2025-09-08 00:41:11.589201 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:11.589212 | orchestrator | Monday 08 September 2025 00:41:04 +0000 (0:00:00.201) 0:00:08.806 ******
2025-09-08 00:41:11.589223 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:11.589234 | orchestrator |
2025-09-08 00:41:11.589245 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:11.589256 | orchestrator | Monday 08 September 2025 00:41:04 +0000 (0:00:00.175) 0:00:08.982 ******
2025-09-08 00:41:11.589266 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:11.589277 | orchestrator |
2025-09-08 00:41:11.589289 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:11.589300 | orchestrator | Monday 08 September 2025 00:41:04 +0000 (0:00:00.187) 0:00:09.169 ******
2025-09-08 00:41:11.589311 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:11.589322 | orchestrator |
2025-09-08 00:41:11.589333 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-08 00:41:11.589344 | orchestrator | Monday 08 September 2025 00:41:05 +0000 (0:00:00.218) 0:00:09.388 ******
2025-09-08 00:41:11.589355 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:11.589366 | orchestrator |
2025-09-08 00:41:11.589376 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-08 00:41:11.589387 | orchestrator | Monday 08 September 2025 00:41:05 +0000 (0:00:00.126) 0:00:09.514 ******
2025-09-08 00:41:11.589402 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6245231a-5e27-588f-a545-a88193777b58'}})
2025-09-08 00:41:11.589416 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7231c7d5-5dfe-5215-9efd-b7a5c24f93db'}})
2025-09-08 00:41:11.589428 | orchestrator |
2025-09-08 00:41:11.589441 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-08 00:41:11.589454 | orchestrator | Monday 08 September 2025 00:41:05 +0000 (0:00:00.227) 0:00:09.742 ******
2025-09-08 00:41:11.589469 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})
2025-09-08 00:41:11.589511 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})
2025-09-08 00:41:11.589525 | orchestrator |
2025-09-08 00:41:11.589559 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-08 00:41:11.589572 | orchestrator | Monday 08 September 2025 00:41:07 +0000 (0:00:01.938) 0:00:11.681 ******
2025-09-08 00:41:11.589590 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})
2025-09-08 00:41:11.589606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})
2025-09-08 00:41:11.589619 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:41:11.589632 | orchestrator |
2025-09-08 00:41:11.589645 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-08 00:41:11.589658 | orchestrator | Monday 08 September 2025 00:41:07 +0000 (0:00:00.161) 0:00:11.842 ******
2025-09-08 00:41:11.589671 | orchestrator | changed: [testbed-node-3] => (item={'data':
'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'}) 2025-09-08 00:41:11.589684 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'}) 2025-09-08 00:41:11.589696 | orchestrator | 2025-09-08 00:41:11.589709 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-08 00:41:11.589723 | orchestrator | Monday 08 September 2025 00:41:09 +0000 (0:00:01.535) 0:00:13.379 ****** 2025-09-08 00:41:11.589736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:11.589768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:11.589781 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.589792 | orchestrator | 2025-09-08 00:41:11.589803 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-08 00:41:11.589814 | orchestrator | Monday 08 September 2025 00:41:09 +0000 (0:00:00.170) 0:00:13.549 ****** 2025-09-08 00:41:11.589825 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.589836 | orchestrator | 2025-09-08 00:41:11.589846 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-08 00:41:11.589875 | orchestrator | Monday 08 September 2025 00:41:09 +0000 (0:00:00.169) 0:00:13.718 ****** 2025-09-08 00:41:11.589887 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:11.589898 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:11.589909 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.589920 | orchestrator | 2025-09-08 00:41:11.589931 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-08 00:41:11.589941 | orchestrator | Monday 08 September 2025 00:41:09 +0000 (0:00:00.406) 0:00:14.125 ****** 2025-09-08 00:41:11.589952 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.589963 | orchestrator | 2025-09-08 00:41:11.589974 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-08 00:41:11.589985 | orchestrator | Monday 08 September 2025 00:41:10 +0000 (0:00:00.172) 0:00:14.297 ****** 2025-09-08 00:41:11.589995 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:11.590070 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:11.590085 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.590096 | orchestrator | 2025-09-08 00:41:11.590107 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-08 00:41:11.590118 | orchestrator | Monday 08 September 2025 00:41:10 +0000 (0:00:00.179) 0:00:14.477 ****** 2025-09-08 00:41:11.590128 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.590139 | orchestrator | 2025-09-08 00:41:11.590150 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-08 00:41:11.590161 | orchestrator | Monday 08 September 2025 00:41:10 +0000 (0:00:00.161) 0:00:14.638 ****** 2025-09-08 00:41:11.590172 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:11.590182 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:11.590193 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.590204 | orchestrator | 2025-09-08 00:41:11.590215 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-08 00:41:11.590226 | orchestrator | Monday 08 September 2025 00:41:10 +0000 (0:00:00.146) 0:00:14.785 ****** 2025-09-08 00:41:11.590237 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:11.590248 | orchestrator | 2025-09-08 00:41:11.590259 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-08 00:41:11.590270 | orchestrator | Monday 08 September 2025 00:41:10 +0000 (0:00:00.145) 0:00:14.931 ****** 2025-09-08 00:41:11.590281 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:11.590298 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:11.590309 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.590320 | orchestrator | 2025-09-08 00:41:11.590331 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-08 00:41:11.590342 | orchestrator | Monday 08 September 2025 00:41:10 +0000 (0:00:00.166) 0:00:15.097 ****** 2025-09-08 00:41:11.590353 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  
2025-09-08 00:41:11.590364 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:11.590375 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.590386 | orchestrator | 2025-09-08 00:41:11.590397 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-08 00:41:11.590408 | orchestrator | Monday 08 September 2025 00:41:11 +0000 (0:00:00.170) 0:00:15.268 ****** 2025-09-08 00:41:11.590418 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:11.590429 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:11.590440 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.590451 | orchestrator | 2025-09-08 00:41:11.590462 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-08 00:41:11.590473 | orchestrator | Monday 08 September 2025 00:41:11 +0000 (0:00:00.185) 0:00:15.454 ****** 2025-09-08 00:41:11.590484 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.590495 | orchestrator | 2025-09-08 00:41:11.590505 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-08 00:41:11.590524 | orchestrator | Monday 08 September 2025 00:41:11 +0000 (0:00:00.179) 0:00:15.633 ****** 2025-09-08 00:41:11.590535 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:11.590546 | orchestrator | 2025-09-08 00:41:11.590564 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-08 00:41:18.034730 | orchestrator | Monday 08 September 2025 00:41:11 +0000 (0:00:00.169) 
0:00:15.803 ****** 2025-09-08 00:41:18.034984 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.035000 | orchestrator | 2025-09-08 00:41:18.035013 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-08 00:41:18.035024 | orchestrator | Monday 08 September 2025 00:41:11 +0000 (0:00:00.163) 0:00:15.967 ****** 2025-09-08 00:41:18.035036 | orchestrator | ok: [testbed-node-3] => { 2025-09-08 00:41:18.035047 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-08 00:41:18.035059 | orchestrator | } 2025-09-08 00:41:18.035070 | orchestrator | 2025-09-08 00:41:18.035081 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-08 00:41:18.035092 | orchestrator | Monday 08 September 2025 00:41:12 +0000 (0:00:00.370) 0:00:16.337 ****** 2025-09-08 00:41:18.035103 | orchestrator | ok: [testbed-node-3] => { 2025-09-08 00:41:18.035114 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-08 00:41:18.035125 | orchestrator | } 2025-09-08 00:41:18.035136 | orchestrator | 2025-09-08 00:41:18.035147 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-08 00:41:18.035158 | orchestrator | Monday 08 September 2025 00:41:12 +0000 (0:00:00.170) 0:00:16.508 ****** 2025-09-08 00:41:18.035169 | orchestrator | ok: [testbed-node-3] => { 2025-09-08 00:41:18.035180 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-08 00:41:18.035191 | orchestrator | } 2025-09-08 00:41:18.035202 | orchestrator | 2025-09-08 00:41:18.035214 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-08 00:41:18.035225 | orchestrator | Monday 08 September 2025 00:41:12 +0000 (0:00:00.155) 0:00:16.664 ****** 2025-09-08 00:41:18.035236 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:18.035250 | orchestrator | 2025-09-08 00:41:18.035263 | orchestrator | TASK [Gather WAL VGs with 
total and available size in bytes] ******************* 2025-09-08 00:41:18.035275 | orchestrator | Monday 08 September 2025 00:41:13 +0000 (0:00:00.735) 0:00:17.399 ****** 2025-09-08 00:41:18.035288 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:18.035301 | orchestrator | 2025-09-08 00:41:18.035313 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-08 00:41:18.035327 | orchestrator | Monday 08 September 2025 00:41:13 +0000 (0:00:00.565) 0:00:17.965 ****** 2025-09-08 00:41:18.035340 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:18.035354 | orchestrator | 2025-09-08 00:41:18.035368 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-08 00:41:18.035380 | orchestrator | Monday 08 September 2025 00:41:14 +0000 (0:00:00.622) 0:00:18.587 ****** 2025-09-08 00:41:18.035393 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:18.035405 | orchestrator | 2025-09-08 00:41:18.035418 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-08 00:41:18.035431 | orchestrator | Monday 08 September 2025 00:41:14 +0000 (0:00:00.164) 0:00:18.752 ****** 2025-09-08 00:41:18.035444 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.035456 | orchestrator | 2025-09-08 00:41:18.035469 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-08 00:41:18.035483 | orchestrator | Monday 08 September 2025 00:41:14 +0000 (0:00:00.168) 0:00:18.920 ****** 2025-09-08 00:41:18.035496 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.035508 | orchestrator | 2025-09-08 00:41:18.035522 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-08 00:41:18.035535 | orchestrator | Monday 08 September 2025 00:41:14 +0000 (0:00:00.162) 0:00:19.083 ****** 2025-09-08 00:41:18.035548 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-08 00:41:18.035588 | orchestrator |  "vgs_report": { 2025-09-08 00:41:18.035602 | orchestrator |  "vg": [] 2025-09-08 00:41:18.035615 | orchestrator |  } 2025-09-08 00:41:18.035625 | orchestrator | } 2025-09-08 00:41:18.035636 | orchestrator | 2025-09-08 00:41:18.035647 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-08 00:41:18.035658 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.145) 0:00:19.229 ****** 2025-09-08 00:41:18.035669 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.035680 | orchestrator | 2025-09-08 00:41:18.035690 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-08 00:41:18.035701 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.155) 0:00:19.384 ****** 2025-09-08 00:41:18.035712 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.035722 | orchestrator | 2025-09-08 00:41:18.035753 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-08 00:41:18.035765 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.164) 0:00:19.548 ****** 2025-09-08 00:41:18.035776 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.035787 | orchestrator | 2025-09-08 00:41:18.035798 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-08 00:41:18.035809 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.264) 0:00:19.813 ****** 2025-09-08 00:41:18.035819 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.035830 | orchestrator | 2025-09-08 00:41:18.035841 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-08 00:41:18.035851 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.128) 0:00:19.942 ****** 2025-09-08 00:41:18.035862 | orchestrator | skipping: 
[testbed-node-3] 2025-09-08 00:41:18.035873 | orchestrator | 2025-09-08 00:41:18.035901 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-08 00:41:18.035913 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.124) 0:00:20.067 ****** 2025-09-08 00:41:18.035924 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.035934 | orchestrator | 2025-09-08 00:41:18.035945 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-08 00:41:18.035956 | orchestrator | Monday 08 September 2025 00:41:15 +0000 (0:00:00.131) 0:00:20.198 ****** 2025-09-08 00:41:18.035967 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.035977 | orchestrator | 2025-09-08 00:41:18.035988 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-08 00:41:18.035999 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.124) 0:00:20.323 ****** 2025-09-08 00:41:18.036010 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036020 | orchestrator | 2025-09-08 00:41:18.036031 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-08 00:41:18.036062 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.132) 0:00:20.456 ****** 2025-09-08 00:41:18.036073 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036084 | orchestrator | 2025-09-08 00:41:18.036095 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-08 00:41:18.036106 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.132) 0:00:20.588 ****** 2025-09-08 00:41:18.036116 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036127 | orchestrator | 2025-09-08 00:41:18.036138 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-08 00:41:18.036148 | 
orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.115) 0:00:20.703 ****** 2025-09-08 00:41:18.036159 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036170 | orchestrator | 2025-09-08 00:41:18.036180 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-08 00:41:18.036191 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.128) 0:00:20.832 ****** 2025-09-08 00:41:18.036202 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036213 | orchestrator | 2025-09-08 00:41:18.036224 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-08 00:41:18.036243 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.124) 0:00:20.956 ****** 2025-09-08 00:41:18.036254 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036264 | orchestrator | 2025-09-08 00:41:18.036275 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-08 00:41:18.036286 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.124) 0:00:21.080 ****** 2025-09-08 00:41:18.036297 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036307 | orchestrator | 2025-09-08 00:41:18.036318 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-08 00:41:18.036329 | orchestrator | Monday 08 September 2025 00:41:16 +0000 (0:00:00.121) 0:00:21.202 ****** 2025-09-08 00:41:18.036341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:18.036355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:18.036365 | orchestrator | skipping: [testbed-node-3] 2025-09-08 
00:41:18.036376 | orchestrator | 2025-09-08 00:41:18.036387 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-08 00:41:18.036398 | orchestrator | Monday 08 September 2025 00:41:17 +0000 (0:00:00.140) 0:00:21.342 ****** 2025-09-08 00:41:18.036409 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:18.036420 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:18.036431 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036442 | orchestrator | 2025-09-08 00:41:18.036453 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-08 00:41:18.036463 | orchestrator | Monday 08 September 2025 00:41:17 +0000 (0:00:00.293) 0:00:21.635 ****** 2025-09-08 00:41:18.036479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:18.036491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:18.036502 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036513 | orchestrator | 2025-09-08 00:41:18.036523 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-08 00:41:18.036534 | orchestrator | Monday 08 September 2025 00:41:17 +0000 (0:00:00.167) 0:00:21.803 ****** 2025-09-08 00:41:18.036545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 
00:41:18.036556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:18.036567 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036578 | orchestrator | 2025-09-08 00:41:18.036588 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-08 00:41:18.036599 | orchestrator | Monday 08 September 2025 00:41:17 +0000 (0:00:00.154) 0:00:21.958 ****** 2025-09-08 00:41:18.036610 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:18.036621 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:18.036632 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:18.036642 | orchestrator | 2025-09-08 00:41:18.036653 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-08 00:41:18.036671 | orchestrator | Monday 08 September 2025 00:41:17 +0000 (0:00:00.153) 0:00:22.112 ****** 2025-09-08 00:41:18.036682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:18.036699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:23.109454 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:23.109616 | orchestrator | 2025-09-08 00:41:23.109645 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-08 00:41:23.109668 | orchestrator | Monday 08 September 2025 
00:41:18 +0000 (0:00:00.139) 0:00:22.251 ****** 2025-09-08 00:41:23.109682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:23.109696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:23.109707 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:23.109718 | orchestrator | 2025-09-08 00:41:23.109763 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-08 00:41:23.109822 | orchestrator | Monday 08 September 2025 00:41:18 +0000 (0:00:00.148) 0:00:22.399 ****** 2025-09-08 00:41:23.109836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:23.109848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:23.109859 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:23.109870 | orchestrator | 2025-09-08 00:41:23.109882 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-08 00:41:23.109893 | orchestrator | Monday 08 September 2025 00:41:18 +0000 (0:00:00.152) 0:00:22.552 ****** 2025-09-08 00:41:23.109904 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:23.109916 | orchestrator | 2025-09-08 00:41:23.109927 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-08 00:41:23.109938 | orchestrator | Monday 08 September 2025 00:41:18 +0000 (0:00:00.536) 0:00:23.088 ****** 2025-09-08 00:41:23.109949 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:23.109962 | 
orchestrator | 2025-09-08 00:41:23.109975 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-08 00:41:23.109987 | orchestrator | Monday 08 September 2025 00:41:19 +0000 (0:00:00.514) 0:00:23.603 ****** 2025-09-08 00:41:23.109999 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:41:23.110012 | orchestrator | 2025-09-08 00:41:23.110086 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-08 00:41:23.110100 | orchestrator | Monday 08 September 2025 00:41:19 +0000 (0:00:00.128) 0:00:23.732 ****** 2025-09-08 00:41:23.110113 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'vg_name': 'ceph-6245231a-5e27-588f-a545-a88193777b58'}) 2025-09-08 00:41:23.110127 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'vg_name': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'}) 2025-09-08 00:41:23.110141 | orchestrator | 2025-09-08 00:41:23.110154 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-08 00:41:23.110168 | orchestrator | Monday 08 September 2025 00:41:19 +0000 (0:00:00.193) 0:00:23.925 ****** 2025-09-08 00:41:23.110180 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:23.110193 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:23.110236 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:23.110250 | orchestrator | 2025-09-08 00:41:23.110262 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-08 00:41:23.110275 | orchestrator | Monday 08 September 2025 00:41:19 +0000 
(0:00:00.160) 0:00:24.085 ****** 2025-09-08 00:41:23.110288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:23.110301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:23.110314 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:23.110324 | orchestrator | 2025-09-08 00:41:23.110335 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-08 00:41:23.110346 | orchestrator | Monday 08 September 2025 00:41:20 +0000 (0:00:00.319) 0:00:24.405 ****** 2025-09-08 00:41:23.110357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})  2025-09-08 00:41:23.110369 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})  2025-09-08 00:41:23.110380 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:41:23.110391 | orchestrator | 2025-09-08 00:41:23.110402 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-08 00:41:23.110413 | orchestrator | Monday 08 September 2025 00:41:20 +0000 (0:00:00.143) 0:00:24.548 ****** 2025-09-08 00:41:23.110424 | orchestrator | ok: [testbed-node-3] => { 2025-09-08 00:41:23.110435 | orchestrator |  "lvm_report": { 2025-09-08 00:41:23.110446 | orchestrator |  "lv": [ 2025-09-08 00:41:23.110457 | orchestrator |  { 2025-09-08 00:41:23.110489 | orchestrator |  "lv_name": "osd-block-6245231a-5e27-588f-a545-a88193777b58", 2025-09-08 00:41:23.110502 | orchestrator |  "vg_name": "ceph-6245231a-5e27-588f-a545-a88193777b58" 2025-09-08 00:41:23.110513 
| orchestrator |  }, 2025-09-08 00:41:23.110524 | orchestrator |  { 2025-09-08 00:41:23.110535 | orchestrator |  "lv_name": "osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db", 2025-09-08 00:41:23.110546 | orchestrator |  "vg_name": "ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db" 2025-09-08 00:41:23.110557 | orchestrator |  } 2025-09-08 00:41:23.110568 | orchestrator |  ], 2025-09-08 00:41:23.110578 | orchestrator |  "pv": [ 2025-09-08 00:41:23.110589 | orchestrator |  { 2025-09-08 00:41:23.110600 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-08 00:41:23.110611 | orchestrator |  "vg_name": "ceph-6245231a-5e27-588f-a545-a88193777b58" 2025-09-08 00:41:23.110622 | orchestrator |  }, 2025-09-08 00:41:23.110633 | orchestrator |  { 2025-09-08 00:41:23.110643 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-08 00:41:23.110654 | orchestrator |  "vg_name": "ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db" 2025-09-08 00:41:23.110665 | orchestrator |  } 2025-09-08 00:41:23.110676 | orchestrator |  ] 2025-09-08 00:41:23.110687 | orchestrator |  } 2025-09-08 00:41:23.110698 | orchestrator | } 2025-09-08 00:41:23.110709 | orchestrator | 2025-09-08 00:41:23.110720 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-08 00:41:23.110755 | orchestrator | 2025-09-08 00:41:23.110766 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-08 00:41:23.110777 | orchestrator | Monday 08 September 2025 00:41:20 +0000 (0:00:00.319) 0:00:24.868 ****** 2025-09-08 00:41:23.110788 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-08 00:41:23.110800 | orchestrator | 2025-09-08 00:41:23.110819 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-08 00:41:23.110830 | orchestrator | Monday 08 September 2025 00:41:20 +0000 (0:00:00.210) 0:00:25.078 ****** 2025-09-08 00:41:23.110840 | orchestrator | ok: [testbed-node-4] 
2025-09-08 00:41:23.110852 | orchestrator |
2025-09-08 00:41:23.110862 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:23.110873 | orchestrator | Monday 08 September 2025 00:41:21 +0000 (0:00:00.229) 0:00:25.308 ******
2025-09-08 00:41:23.110902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-08 00:41:23.110913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-08 00:41:23.110924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-08 00:41:23.110935 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-08 00:41:23.110946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-08 00:41:23.110957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-08 00:41:23.110967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-08 00:41:23.110978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-08 00:41:23.110994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-08 00:41:23.111005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-08 00:41:23.111016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-08 00:41:23.111027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-08 00:41:23.111038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-08 00:41:23.111049 | orchestrator |
2025-09-08 00:41:23.111060 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:23.111071 | orchestrator | Monday 08 September 2025 00:41:21 +0000 (0:00:00.369) 0:00:25.677 ******
2025-09-08 00:41:23.111081 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:23.111092 | orchestrator |
2025-09-08 00:41:23.111103 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:23.111114 | orchestrator | Monday 08 September 2025 00:41:21 +0000 (0:00:00.194) 0:00:25.871 ******
2025-09-08 00:41:23.111125 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:23.111136 | orchestrator |
2025-09-08 00:41:23.111146 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:23.111157 | orchestrator | Monday 08 September 2025 00:41:21 +0000 (0:00:00.177) 0:00:26.049 ******
2025-09-08 00:41:23.111168 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:23.111179 | orchestrator |
2025-09-08 00:41:23.111189 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:23.111200 | orchestrator | Monday 08 September 2025 00:41:22 +0000 (0:00:00.184) 0:00:26.233 ******
2025-09-08 00:41:23.111211 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:23.111222 | orchestrator |
2025-09-08 00:41:23.111232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:23.111243 | orchestrator | Monday 08 September 2025 00:41:22 +0000 (0:00:00.457) 0:00:26.691 ******
2025-09-08 00:41:23.111254 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:23.111265 | orchestrator |
2025-09-08 00:41:23.111275 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:23.111286 | orchestrator | Monday 08 September 2025 00:41:22 +0000 (0:00:00.205) 0:00:26.897 ******
2025-09-08 00:41:23.111297 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:23.111308 | orchestrator |
2025-09-08 00:41:23.111318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:23.111337 | orchestrator | Monday 08 September 2025 00:41:22 +0000 (0:00:00.215) 0:00:27.112 ******
2025-09-08 00:41:23.111348 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:23.111359 | orchestrator |
2025-09-08 00:41:23.111377 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:33.648224 | orchestrator | Monday 08 September 2025 00:41:23 +0000 (0:00:00.213) 0:00:27.326 ******
2025-09-08 00:41:33.648354 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.648371 | orchestrator |
2025-09-08 00:41:33.648384 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:33.648395 | orchestrator | Monday 08 September 2025 00:41:23 +0000 (0:00:00.184) 0:00:27.511 ******
2025-09-08 00:41:33.648407 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9)
2025-09-08 00:41:33.648419 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9)
2025-09-08 00:41:33.648430 | orchestrator |
2025-09-08 00:41:33.648441 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:33.648452 | orchestrator | Monday 08 September 2025 00:41:23 +0000 (0:00:00.402) 0:00:27.913 ******
2025-09-08 00:41:33.648463 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359)
2025-09-08 00:41:33.648474 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359)
2025-09-08 00:41:33.648485 | orchestrator |
2025-09-08 00:41:33.648496 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:33.648507 | orchestrator | Monday 08 September 2025 00:41:24 +0000 (0:00:00.404) 0:00:28.318 ******
2025-09-08 00:41:33.648517 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98)
2025-09-08 00:41:33.648528 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98)
2025-09-08 00:41:33.648539 | orchestrator |
2025-09-08 00:41:33.648550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:33.648561 | orchestrator | Monday 08 September 2025 00:41:24 +0000 (0:00:00.407) 0:00:28.725 ******
2025-09-08 00:41:33.648572 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72)
2025-09-08 00:41:33.648583 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72)
2025-09-08 00:41:33.648594 | orchestrator |
2025-09-08 00:41:33.648604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-08 00:41:33.648615 | orchestrator | Monday 08 September 2025 00:41:24 +0000 (0:00:00.448) 0:00:29.174 ******
2025-09-08 00:41:33.648626 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-08 00:41:33.648637 | orchestrator |
2025-09-08 00:41:33.648648 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.648659 | orchestrator | Monday 08 September 2025 00:41:25 +0000 (0:00:00.344) 0:00:29.519 ******
2025-09-08 00:41:33.648669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-08 00:41:33.648701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-08 00:41:33.648713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-08 00:41:33.648755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-08 00:41:33.648769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-08 00:41:33.648781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-08 00:41:33.648793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-08 00:41:33.648833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-08 00:41:33.648847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-08 00:41:33.648860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-08 00:41:33.648872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-08 00:41:33.648885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-08 00:41:33.648898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-08 00:41:33.648911 | orchestrator |
2025-09-08 00:41:33.648923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.648935 | orchestrator | Monday 08 September 2025 00:41:25 +0000 (0:00:00.673) 0:00:30.192 ******
2025-09-08 00:41:33.648949 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.648962 | orchestrator |
2025-09-08 00:41:33.648974 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.648987 | orchestrator | Monday 08 September 2025 00:41:26 +0000 (0:00:00.223) 0:00:30.416 ******
2025-09-08 00:41:33.649000 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649012 | orchestrator |
2025-09-08 00:41:33.649026 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649039 | orchestrator | Monday 08 September 2025 00:41:26 +0000 (0:00:00.196) 0:00:30.612 ******
2025-09-08 00:41:33.649052 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649065 | orchestrator |
2025-09-08 00:41:33.649077 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649087 | orchestrator | Monday 08 September 2025 00:41:26 +0000 (0:00:00.219) 0:00:30.831 ******
2025-09-08 00:41:33.649098 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649109 | orchestrator |
2025-09-08 00:41:33.649137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649149 | orchestrator | Monday 08 September 2025 00:41:26 +0000 (0:00:00.213) 0:00:31.045 ******
2025-09-08 00:41:33.649160 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649171 | orchestrator |
2025-09-08 00:41:33.649181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649192 | orchestrator | Monday 08 September 2025 00:41:27 +0000 (0:00:00.239) 0:00:31.285 ******
2025-09-08 00:41:33.649203 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649214 | orchestrator |
2025-09-08 00:41:33.649224 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649235 | orchestrator | Monday 08 September 2025 00:41:27 +0000 (0:00:00.193) 0:00:31.478 ******
2025-09-08 00:41:33.649246 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649256 | orchestrator |
2025-09-08 00:41:33.649267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649277 | orchestrator | Monday 08 September 2025 00:41:27 +0000 (0:00:00.199) 0:00:31.678 ******
2025-09-08 00:41:33.649288 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649299 | orchestrator |
2025-09-08 00:41:33.649310 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649321 | orchestrator | Monday 08 September 2025 00:41:27 +0000 (0:00:00.203) 0:00:31.882 ******
2025-09-08 00:41:33.649331 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-08 00:41:33.649342 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-08 00:41:33.649353 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-08 00:41:33.649364 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-08 00:41:33.649375 | orchestrator |
2025-09-08 00:41:33.649385 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649397 | orchestrator | Monday 08 September 2025 00:41:28 +0000 (0:00:00.910) 0:00:32.793 ******
2025-09-08 00:41:33.649416 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649427 | orchestrator |
2025-09-08 00:41:33.649437 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649448 | orchestrator | Monday 08 September 2025 00:41:28 +0000 (0:00:00.195) 0:00:32.988 ******
2025-09-08 00:41:33.649459 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649470 | orchestrator |
2025-09-08 00:41:33.649480 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649491 | orchestrator | Monday 08 September 2025 00:41:28 +0000 (0:00:00.186) 0:00:33.175 ******
2025-09-08 00:41:33.649502 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649512 | orchestrator |
2025-09-08 00:41:33.649523 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-08 00:41:33.649534 | orchestrator | Monday 08 September 2025 00:41:29 +0000 (0:00:00.707) 0:00:33.882 ******
2025-09-08 00:41:33.649544 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649555 | orchestrator |
2025-09-08 00:41:33.649566 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-08 00:41:33.649576 | orchestrator | Monday 08 September 2025 00:41:29 +0000 (0:00:00.206) 0:00:34.089 ******
2025-09-08 00:41:33.649587 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649598 | orchestrator |
2025-09-08 00:41:33.649609 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-08 00:41:33.649620 | orchestrator | Monday 08 September 2025 00:41:30 +0000 (0:00:00.176) 0:00:34.266 ******
2025-09-08 00:41:33.649631 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'}})
2025-09-08 00:41:33.649642 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e84ec590-0593-5433-8536-9c5125166743'}})
2025-09-08 00:41:33.649653 | orchestrator |
2025-09-08 00:41:33.649664 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-08 00:41:33.649675 | orchestrator | Monday 08 September 2025 00:41:30 +0000 (0:00:00.199) 0:00:34.465 ******
2025-09-08 00:41:33.649687 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:33.649700 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:33.649711 | orchestrator |
2025-09-08 00:41:33.649738 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-08 00:41:33.649749 | orchestrator | Monday 08 September 2025 00:41:32 +0000 (0:00:01.845) 0:00:36.310 ******
2025-09-08 00:41:33.649760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:33.649772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:33.649784 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:33.649794 | orchestrator |
2025-09-08 00:41:33.649805 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-08 00:41:33.649816 | orchestrator | Monday 08 September 2025 00:41:32 +0000 (0:00:00.195) 0:00:36.506 ******
2025-09-08 00:41:33.649826 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:33.649837 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:33.649848 | orchestrator |
2025-09-08 00:41:33.649865 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-08 00:41:39.357350 | orchestrator | Monday 08 September 2025 00:41:33 +0000 (0:00:01.350) 0:00:37.857 ******
2025-09-08 00:41:39.357528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:39.357548 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:39.357578 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.357664 | orchestrator |
2025-09-08 00:41:39.357680 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-08 00:41:39.357693 | orchestrator | Monday 08 September 2025 00:41:33 +0000 (0:00:00.187) 0:00:38.044 ******
2025-09-08 00:41:39.357704 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.357752 | orchestrator |
2025-09-08 00:41:39.357765 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-08 00:41:39.357777 | orchestrator | Monday 08 September 2025 00:41:33 +0000 (0:00:00.159) 0:00:38.204 ******
2025-09-08 00:41:39.357789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:39.357821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:39.357833 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.357844 | orchestrator |
2025-09-08 00:41:39.357856 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-08 00:41:39.357869 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.161) 0:00:38.365 ******
2025-09-08 00:41:39.357881 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.357894 | orchestrator |
2025-09-08 00:41:39.357907 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-08 00:41:39.357921 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.160) 0:00:38.526 ******
2025-09-08 00:41:39.357934 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:39.357948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:39.357960 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.357973 | orchestrator |
2025-09-08 00:41:39.357986 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-08 00:41:39.357999 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.144) 0:00:38.670 ******
2025-09-08 00:41:39.358012 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.358080 | orchestrator |
2025-09-08 00:41:39.358100 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-08 00:41:39.358113 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.335) 0:00:39.006 ******
2025-09-08 00:41:39.358125 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:39.358138 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:39.358151 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.358164 | orchestrator |
2025-09-08 00:41:39.358176 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-08 00:41:39.358190 | orchestrator | Monday 08 September 2025 00:41:34 +0000 (0:00:00.163) 0:00:39.169 ******
2025-09-08 00:41:39.358203 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:39.358216 | orchestrator |
2025-09-08 00:41:39.358226 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-08 00:41:39.358237 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.150) 0:00:39.320 ******
2025-09-08 00:41:39.358258 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:39.358270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:39.358281 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.358292 | orchestrator |
2025-09-08 00:41:39.358303 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-08 00:41:39.358314 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.161) 0:00:39.481 ******
2025-09-08 00:41:39.358325 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:39.358336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:39.358348 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.358359 | orchestrator |
2025-09-08 00:41:39.358370 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-08 00:41:39.358381 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.160) 0:00:39.642 ******
2025-09-08 00:41:39.358410 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:39.358422 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:39.358433 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.358444 | orchestrator |
2025-09-08 00:41:39.358454 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-08 00:41:39.358465 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.156) 0:00:39.798 ******
2025-09-08 00:41:39.358476 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.358487 | orchestrator |
2025-09-08 00:41:39.358498 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-08 00:41:39.358508 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.142) 0:00:39.941 ******
2025-09-08 00:41:39.358519 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.358530 | orchestrator |
2025-09-08 00:41:39.358540 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-08 00:41:39.358551 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.130) 0:00:40.072 ******
2025-09-08 00:41:39.358562 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.358572 | orchestrator |
2025-09-08 00:41:39.358583 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-08 00:41:39.358593 | orchestrator | Monday 08 September 2025 00:41:35 +0000 (0:00:00.141) 0:00:40.213 ******
2025-09-08 00:41:39.358604 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:41:39.358615 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-08 00:41:39.358627 | orchestrator | }
2025-09-08 00:41:39.358638 | orchestrator |
2025-09-08 00:41:39.358648 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-08 00:41:39.358659 | orchestrator | Monday 08 September 2025 00:41:36 +0000 (0:00:00.168) 0:00:40.382 ******
2025-09-08 00:41:39.358670 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:41:39.358681 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-08 00:41:39.358691 | orchestrator | }
2025-09-08 00:41:39.358702 | orchestrator |
2025-09-08 00:41:39.358731 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-08 00:41:39.358742 | orchestrator | Monday 08 September 2025 00:41:36 +0000 (0:00:00.152) 0:00:40.534 ******
2025-09-08 00:41:39.358753 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:41:39.358764 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-08 00:41:39.358775 | orchestrator | }
2025-09-08 00:41:39.358794 | orchestrator |
2025-09-08 00:41:39.358805 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-08 00:41:39.358816 | orchestrator | Monday 08 September 2025 00:41:36 +0000 (0:00:00.134) 0:00:40.669 ******
2025-09-08 00:41:39.358826 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:39.358837 | orchestrator |
2025-09-08 00:41:39.358848 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-08 00:41:39.358859 | orchestrator | Monday 08 September 2025 00:41:37 +0000 (0:00:00.720) 0:00:41.390 ******
2025-09-08 00:41:39.358869 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:39.358880 | orchestrator |
2025-09-08 00:41:39.358896 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-08 00:41:39.358907 | orchestrator | Monday 08 September 2025 00:41:37 +0000 (0:00:00.530) 0:00:41.920 ******
2025-09-08 00:41:39.358918 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:39.358929 | orchestrator |
2025-09-08 00:41:39.358940 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-08 00:41:39.358950 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.540) 0:00:42.461 ******
2025-09-08 00:41:39.358961 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:41:39.358972 | orchestrator |
2025-09-08 00:41:39.358983 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-08 00:41:39.358993 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.154) 0:00:42.615 ******
2025-09-08 00:41:39.359004 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.359015 | orchestrator |
2025-09-08 00:41:39.359026 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-08 00:41:39.359036 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.126) 0:00:42.742 ******
2025-09-08 00:41:39.359047 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.359058 | orchestrator |
2025-09-08 00:41:39.359068 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-08 00:41:39.359079 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.117) 0:00:42.859 ******
2025-09-08 00:41:39.359090 | orchestrator | ok: [testbed-node-4] => {
2025-09-08 00:41:39.359101 | orchestrator |     "vgs_report": {
2025-09-08 00:41:39.359111 | orchestrator |         "vg": []
2025-09-08 00:41:39.359122 | orchestrator |     }
2025-09-08 00:41:39.359133 | orchestrator | }
2025-09-08 00:41:39.359144 | orchestrator |
2025-09-08 00:41:39.359154 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-08 00:41:39.359165 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.147) 0:00:43.006 ******
2025-09-08 00:41:39.359176 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.359187 | orchestrator |
2025-09-08 00:41:39.359197 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-08 00:41:39.359208 | orchestrator | Monday 08 September 2025 00:41:38 +0000 (0:00:00.145) 0:00:43.152 ******
2025-09-08 00:41:39.359219 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.359229 | orchestrator |
2025-09-08 00:41:39.359240 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-08 00:41:39.359251 | orchestrator | Monday 08 September 2025 00:41:39 +0000 (0:00:00.137) 0:00:43.290 ******
2025-09-08 00:41:39.359261 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.359272 | orchestrator |
2025-09-08 00:41:39.359283 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-08 00:41:39.359294 | orchestrator | Monday 08 September 2025 00:41:39 +0000 (0:00:00.138) 0:00:43.428 ******
2025-09-08 00:41:39.359304 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:39.359315 | orchestrator |
2025-09-08 00:41:39.359326 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-08 00:41:39.359344 | orchestrator | Monday 08 September 2025 00:41:39 +0000 (0:00:00.140) 0:00:43.568 ******
2025-09-08 00:41:44.326320 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326447 | orchestrator |
2025-09-08 00:41:44.326462 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-08 00:41:44.326499 | orchestrator | Monday 08 September 2025 00:41:39 +0000 (0:00:00.183) 0:00:43.752 ******
2025-09-08 00:41:44.326509 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326519 | orchestrator |
2025-09-08 00:41:44.326529 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-08 00:41:44.326539 | orchestrator | Monday 08 September 2025 00:41:39 +0000 (0:00:00.367) 0:00:44.119 ******
2025-09-08 00:41:44.326548 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326558 | orchestrator |
2025-09-08 00:41:44.326568 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-08 00:41:44.326577 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.145) 0:00:44.264 ******
2025-09-08 00:41:44.326587 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326596 | orchestrator |
2025-09-08 00:41:44.326606 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-08 00:41:44.326615 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.137) 0:00:44.402 ******
2025-09-08 00:41:44.326624 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326634 | orchestrator |
2025-09-08 00:41:44.326643 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-08 00:41:44.326653 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.145) 0:00:44.547 ******
2025-09-08 00:41:44.326662 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326672 | orchestrator |
2025-09-08 00:41:44.326681 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-08 00:41:44.326691 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.147) 0:00:44.695 ******
2025-09-08 00:41:44.326700 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326735 | orchestrator |
2025-09-08 00:41:44.326745 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-08 00:41:44.326754 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.146) 0:00:44.841 ******
2025-09-08 00:41:44.326764 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326773 | orchestrator |
2025-09-08 00:41:44.326783 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-08 00:41:44.326792 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.176) 0:00:45.017 ******
2025-09-08 00:41:44.326802 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326811 | orchestrator |
2025-09-08 00:41:44.326821 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-08 00:41:44.326831 | orchestrator | Monday 08 September 2025 00:41:40 +0000 (0:00:00.140) 0:00:45.158 ******
2025-09-08 00:41:44.326843 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326854 | orchestrator |
2025-09-08 00:41:44.326865 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-08 00:41:44.326876 | orchestrator | Monday 08 September 2025 00:41:41 +0000 (0:00:00.174) 0:00:45.332 ******
2025-09-08 00:41:44.326906 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:44.326920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:44.326932 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.326944 | orchestrator |
2025-09-08 00:41:44.326956 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-08 00:41:44.326967 | orchestrator | Monday 08 September 2025 00:41:41 +0000 (0:00:00.163) 0:00:45.496 ******
2025-09-08 00:41:44.326978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:44.326990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:44.327009 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.327020 | orchestrator |
2025-09-08 00:41:44.327032 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-08 00:41:44.327044 | orchestrator | Monday 08 September 2025 00:41:41 +0000 (0:00:00.150) 0:00:45.647 ******
2025-09-08 00:41:44.327055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:44.327067 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:44.327078 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.327089 | orchestrator |
2025-09-08 00:41:44.327101 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-08 00:41:44.327112 | orchestrator | Monday 08 September 2025 00:41:41 +0000 (0:00:00.151) 0:00:45.798 ******
2025-09-08 00:41:44.327123 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:44.327135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:44.327147 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.327158 | orchestrator |
2025-09-08 00:41:44.327169 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-08 00:41:44.327198 | orchestrator | Monday 08 September 2025 00:41:41 +0000 (0:00:00.380) 0:00:46.178 ******
2025-09-08 00:41:44.327208 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:44.327218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:44.327228 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.327238 | orchestrator |
2025-09-08 00:41:44.327247 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-08 00:41:44.327257 | orchestrator | Monday 08 September 2025 00:41:42 +0000 (0:00:00.157) 0:00:46.336 ******
2025-09-08 00:41:44.327267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:44.327276 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:44.327286 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.327295 | orchestrator |
2025-09-08 00:41:44.327306 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-08 00:41:44.327316 | orchestrator | Monday 08 September 2025 00:41:42 +0000 (0:00:00.162) 0:00:46.499 ******
2025-09-08 00:41:44.327326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
2025-09-08 00:41:44.327335 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
2025-09-08 00:41:44.327345 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:41:44.327355 | orchestrator |
2025-09-08 00:41:44.327365 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-08 00:41:44.327374 | orchestrator | Monday 08 September 2025 00:41:42 +0000 (0:00:00.172) 0:00:46.671 ******
2025-09-08 00:41:44.327384 | orchestrator | skipping: [testbed-node-4] =>
(item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})  2025-09-08 00:41:44.327394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})  2025-09-08 00:41:44.327410 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:41:44.327420 | orchestrator | 2025-09-08 00:41:44.327430 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-08 00:41:44.327482 | orchestrator | Monday 08 September 2025 00:41:42 +0000 (0:00:00.176) 0:00:46.848 ****** 2025-09-08 00:41:44.327493 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:41:44.327503 | orchestrator | 2025-09-08 00:41:44.327512 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-08 00:41:44.327522 | orchestrator | Monday 08 September 2025 00:41:43 +0000 (0:00:00.525) 0:00:47.374 ****** 2025-09-08 00:41:44.327532 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:41:44.327541 | orchestrator | 2025-09-08 00:41:44.327551 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-08 00:41:44.327561 | orchestrator | Monday 08 September 2025 00:41:43 +0000 (0:00:00.533) 0:00:47.907 ****** 2025-09-08 00:41:44.327571 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:41:44.327580 | orchestrator | 2025-09-08 00:41:44.327590 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-08 00:41:44.327599 | orchestrator | Monday 08 September 2025 00:41:43 +0000 (0:00:00.151) 0:00:48.058 ****** 2025-09-08 00:41:44.327609 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'vg_name': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'}) 2025-09-08 00:41:44.327620 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'vg_name': 'ceph-e84ec590-0593-5433-8536-9c5125166743'}) 2025-09-08 00:41:44.327629 | orchestrator | 2025-09-08 00:41:44.327639 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-08 00:41:44.327649 | orchestrator | Monday 08 September 2025 00:41:44 +0000 (0:00:00.176) 0:00:48.235 ****** 2025-09-08 00:41:44.327658 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})  2025-09-08 00:41:44.327668 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})  2025-09-08 00:41:44.327678 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:41:44.327688 | orchestrator | 2025-09-08 00:41:44.327697 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-08 00:41:44.327723 | orchestrator | Monday 08 September 2025 00:41:44 +0000 (0:00:00.157) 0:00:48.392 ****** 2025-09-08 00:41:44.327733 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})  2025-09-08 00:41:44.327743 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})  2025-09-08 00:41:44.327759 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:41:50.695355 | orchestrator | 2025-09-08 00:41:50.695487 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-08 00:41:50.695504 | orchestrator | Monday 08 September 2025 00:41:44 +0000 (0:00:00.152) 0:00:48.544 ****** 2025-09-08 00:41:50.695517 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})  2025-09-08 00:41:50.695531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})  2025-09-08 00:41:50.695542 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:41:50.695554 | orchestrator | 2025-09-08 00:41:50.695566 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-08 00:41:50.695577 | orchestrator | Monday 08 September 2025 00:41:44 +0000 (0:00:00.175) 0:00:48.720 ****** 2025-09-08 00:41:50.695614 | orchestrator | ok: [testbed-node-4] => { 2025-09-08 00:41:50.695625 | orchestrator |  "lvm_report": { 2025-09-08 00:41:50.695638 | orchestrator |  "lv": [ 2025-09-08 00:41:50.695649 | orchestrator |  { 2025-09-08 00:41:50.695660 | orchestrator |  "lv_name": "osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a", 2025-09-08 00:41:50.695671 | orchestrator |  "vg_name": "ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a" 2025-09-08 00:41:50.695682 | orchestrator |  }, 2025-09-08 00:41:50.695693 | orchestrator |  { 2025-09-08 00:41:50.695729 | orchestrator |  "lv_name": "osd-block-e84ec590-0593-5433-8536-9c5125166743", 2025-09-08 00:41:50.695740 | orchestrator |  "vg_name": "ceph-e84ec590-0593-5433-8536-9c5125166743" 2025-09-08 00:41:50.695750 | orchestrator |  } 2025-09-08 00:41:50.695761 | orchestrator |  ], 2025-09-08 00:41:50.695772 | orchestrator |  "pv": [ 2025-09-08 00:41:50.695782 | orchestrator |  { 2025-09-08 00:41:50.695793 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-08 00:41:50.695804 | orchestrator |  "vg_name": "ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a" 2025-09-08 00:41:50.695815 | orchestrator |  }, 2025-09-08 00:41:50.695825 | orchestrator |  { 2025-09-08 00:41:50.695836 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-08 00:41:50.695847 | orchestrator |  "vg_name": 
"ceph-e84ec590-0593-5433-8536-9c5125166743" 2025-09-08 00:41:50.695858 | orchestrator |  } 2025-09-08 00:41:50.695869 | orchestrator |  ] 2025-09-08 00:41:50.695882 | orchestrator |  } 2025-09-08 00:41:50.695895 | orchestrator | } 2025-09-08 00:41:50.695907 | orchestrator | 2025-09-08 00:41:50.695919 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-08 00:41:50.695933 | orchestrator | 2025-09-08 00:41:50.695946 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-08 00:41:50.695959 | orchestrator | Monday 08 September 2025 00:41:45 +0000 (0:00:00.558) 0:00:49.278 ****** 2025-09-08 00:41:50.695972 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-08 00:41:50.695986 | orchestrator | 2025-09-08 00:41:50.696014 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-08 00:41:50.696027 | orchestrator | Monday 08 September 2025 00:41:45 +0000 (0:00:00.282) 0:00:49.560 ****** 2025-09-08 00:41:50.696040 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:41:50.696053 | orchestrator | 2025-09-08 00:41:50.696066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696079 | orchestrator | Monday 08 September 2025 00:41:45 +0000 (0:00:00.234) 0:00:49.795 ****** 2025-09-08 00:41:50.696092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-08 00:41:50.696104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-08 00:41:50.696117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-08 00:41:50.696129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-08 00:41:50.696142 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-08 00:41:50.696154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-08 00:41:50.696166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-08 00:41:50.696179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-08 00:41:50.696192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-08 00:41:50.696204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-08 00:41:50.696218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-08 00:41:50.696238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-08 00:41:50.696249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-08 00:41:50.696260 | orchestrator | 2025-09-08 00:41:50.696270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696281 | orchestrator | Monday 08 September 2025 00:41:46 +0000 (0:00:00.466) 0:00:50.262 ****** 2025-09-08 00:41:50.696292 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:41:50.696302 | orchestrator | 2025-09-08 00:41:50.696318 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696330 | orchestrator | Monday 08 September 2025 00:41:46 +0000 (0:00:00.212) 0:00:50.475 ****** 2025-09-08 00:41:50.696341 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:41:50.696351 | orchestrator | 2025-09-08 00:41:50.696362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696390 | orchestrator | 
Monday 08 September 2025 00:41:46 +0000 (0:00:00.209) 0:00:50.684 ****** 2025-09-08 00:41:50.696402 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:41:50.696413 | orchestrator | 2025-09-08 00:41:50.696424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696434 | orchestrator | Monday 08 September 2025 00:41:46 +0000 (0:00:00.241) 0:00:50.926 ****** 2025-09-08 00:41:50.696445 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:41:50.696456 | orchestrator | 2025-09-08 00:41:50.696467 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696478 | orchestrator | Monday 08 September 2025 00:41:46 +0000 (0:00:00.206) 0:00:51.133 ****** 2025-09-08 00:41:50.696488 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:41:50.696499 | orchestrator | 2025-09-08 00:41:50.696510 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696520 | orchestrator | Monday 08 September 2025 00:41:47 +0000 (0:00:00.198) 0:00:51.331 ****** 2025-09-08 00:41:50.696531 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:41:50.696542 | orchestrator | 2025-09-08 00:41:50.696552 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696563 | orchestrator | Monday 08 September 2025 00:41:47 +0000 (0:00:00.617) 0:00:51.948 ****** 2025-09-08 00:41:50.696573 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:41:50.696584 | orchestrator | 2025-09-08 00:41:50.696595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696606 | orchestrator | Monday 08 September 2025 00:41:47 +0000 (0:00:00.232) 0:00:52.181 ****** 2025-09-08 00:41:50.696616 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:41:50.696627 | orchestrator | 2025-09-08 00:41:50.696638 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696648 | orchestrator | Monday 08 September 2025 00:41:48 +0000 (0:00:00.224) 0:00:52.406 ****** 2025-09-08 00:41:50.696659 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44) 2025-09-08 00:41:50.696671 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44) 2025-09-08 00:41:50.696682 | orchestrator | 2025-09-08 00:41:50.696693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696723 | orchestrator | Monday 08 September 2025 00:41:48 +0000 (0:00:00.411) 0:00:52.818 ****** 2025-09-08 00:41:50.696734 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e) 2025-09-08 00:41:50.696745 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e) 2025-09-08 00:41:50.696756 | orchestrator | 2025-09-08 00:41:50.696766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696777 | orchestrator | Monday 08 September 2025 00:41:49 +0000 (0:00:00.421) 0:00:53.240 ****** 2025-09-08 00:41:50.696793 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d) 2025-09-08 00:41:50.696812 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d) 2025-09-08 00:41:50.696823 | orchestrator | 2025-09-08 00:41:50.696833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696844 | orchestrator | Monday 08 September 2025 00:41:49 +0000 (0:00:00.423) 0:00:53.663 ****** 2025-09-08 00:41:50.696855 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962) 2025-09-08 00:41:50.696866 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962) 2025-09-08 00:41:50.696876 | orchestrator | 2025-09-08 00:41:50.696887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-08 00:41:50.696898 | orchestrator | Monday 08 September 2025 00:41:49 +0000 (0:00:00.456) 0:00:54.119 ****** 2025-09-08 00:41:50.696908 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-08 00:41:50.696919 | orchestrator | 2025-09-08 00:41:50.696930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:41:50.696940 | orchestrator | Monday 08 September 2025 00:41:50 +0000 (0:00:00.348) 0:00:54.468 ****** 2025-09-08 00:41:50.696951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-08 00:41:50.696962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-08 00:41:50.696972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-08 00:41:50.696983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-08 00:41:50.696994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-08 00:41:50.697004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-08 00:41:50.697015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-08 00:41:50.697025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-08 00:41:50.697036 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-08 00:41:50.697046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-08 00:41:50.697057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-08 00:41:50.697075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-08 00:42:00.295258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-08 00:42:00.295395 | orchestrator | 2025-09-08 00:42:00.295412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295426 | orchestrator | Monday 08 September 2025 00:41:50 +0000 (0:00:00.435) 0:00:54.903 ****** 2025-09-08 00:42:00.295437 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.295449 | orchestrator | 2025-09-08 00:42:00.295461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295472 | orchestrator | Monday 08 September 2025 00:41:50 +0000 (0:00:00.214) 0:00:55.118 ****** 2025-09-08 00:42:00.295483 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.295493 | orchestrator | 2025-09-08 00:42:00.295504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295515 | orchestrator | Monday 08 September 2025 00:41:51 +0000 (0:00:00.246) 0:00:55.364 ****** 2025-09-08 00:42:00.295526 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.295537 | orchestrator | 2025-09-08 00:42:00.295548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295559 | orchestrator | Monday 08 September 2025 00:41:51 +0000 (0:00:00.648) 0:00:56.013 ****** 2025-09-08 00:42:00.295596 | orchestrator | 
skipping: [testbed-node-5] 2025-09-08 00:42:00.295607 | orchestrator | 2025-09-08 00:42:00.295618 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295629 | orchestrator | Monday 08 September 2025 00:41:52 +0000 (0:00:00.221) 0:00:56.235 ****** 2025-09-08 00:42:00.295640 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.295650 | orchestrator | 2025-09-08 00:42:00.295661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295672 | orchestrator | Monday 08 September 2025 00:41:52 +0000 (0:00:00.227) 0:00:56.463 ****** 2025-09-08 00:42:00.295683 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.295720 | orchestrator | 2025-09-08 00:42:00.295732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295743 | orchestrator | Monday 08 September 2025 00:41:52 +0000 (0:00:00.211) 0:00:56.674 ****** 2025-09-08 00:42:00.295753 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.295764 | orchestrator | 2025-09-08 00:42:00.295779 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295791 | orchestrator | Monday 08 September 2025 00:41:52 +0000 (0:00:00.245) 0:00:56.919 ****** 2025-09-08 00:42:00.295804 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.295817 | orchestrator | 2025-09-08 00:42:00.295830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295843 | orchestrator | Monday 08 September 2025 00:41:52 +0000 (0:00:00.241) 0:00:57.160 ****** 2025-09-08 00:42:00.295856 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-08 00:42:00.295870 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-08 00:42:00.295883 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-08 
00:42:00.295896 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-08 00:42:00.295909 | orchestrator | 2025-09-08 00:42:00.295921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295934 | orchestrator | Monday 08 September 2025 00:41:53 +0000 (0:00:00.718) 0:00:57.879 ****** 2025-09-08 00:42:00.295948 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.295960 | orchestrator | 2025-09-08 00:42:00.295973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.295987 | orchestrator | Monday 08 September 2025 00:41:53 +0000 (0:00:00.201) 0:00:58.080 ****** 2025-09-08 00:42:00.296000 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.296014 | orchestrator | 2025-09-08 00:42:00.296026 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.296040 | orchestrator | Monday 08 September 2025 00:41:54 +0000 (0:00:00.203) 0:00:58.284 ****** 2025-09-08 00:42:00.296053 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.296066 | orchestrator | 2025-09-08 00:42:00.296079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-08 00:42:00.296092 | orchestrator | Monday 08 September 2025 00:41:54 +0000 (0:00:00.198) 0:00:58.482 ****** 2025-09-08 00:42:00.296105 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.296117 | orchestrator | 2025-09-08 00:42:00.296128 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-08 00:42:00.296139 | orchestrator | Monday 08 September 2025 00:41:54 +0000 (0:00:00.222) 0:00:58.705 ****** 2025-09-08 00:42:00.296150 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.296160 | orchestrator | 2025-09-08 00:42:00.296171 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-08 00:42:00.296182 | orchestrator | Monday 08 September 2025 00:41:54 +0000 (0:00:00.377) 0:00:59.082 ****** 2025-09-08 00:42:00.296193 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'}}) 2025-09-08 00:42:00.296204 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'}}) 2025-09-08 00:42:00.296226 | orchestrator | 2025-09-08 00:42:00.296237 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-08 00:42:00.296247 | orchestrator | Monday 08 September 2025 00:41:55 +0000 (0:00:00.203) 0:00:59.286 ****** 2025-09-08 00:42:00.296259 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'}) 2025-09-08 00:42:00.296272 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'}) 2025-09-08 00:42:00.296283 | orchestrator | 2025-09-08 00:42:00.296294 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-08 00:42:00.296322 | orchestrator | Monday 08 September 2025 00:41:57 +0000 (0:00:01.964) 0:01:01.250 ****** 2025-09-08 00:42:00.296334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})  2025-09-08 00:42:00.296346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})  2025-09-08 00:42:00.296357 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.296368 | orchestrator | 2025-09-08 00:42:00.296379 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-08 00:42:00.296389 | orchestrator | Monday 08 September 2025 00:41:57 +0000 (0:00:00.164) 0:01:01.415 ****** 2025-09-08 00:42:00.296400 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'}) 2025-09-08 00:42:00.296432 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'}) 2025-09-08 00:42:00.296444 | orchestrator | 2025-09-08 00:42:00.296455 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-08 00:42:00.296466 | orchestrator | Monday 08 September 2025 00:41:58 +0000 (0:00:01.363) 0:01:02.778 ****** 2025-09-08 00:42:00.296477 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})  2025-09-08 00:42:00.296488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})  2025-09-08 00:42:00.296499 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.296510 | orchestrator | 2025-09-08 00:42:00.296521 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-08 00:42:00.296531 | orchestrator | Monday 08 September 2025 00:41:58 +0000 (0:00:00.182) 0:01:02.961 ****** 2025-09-08 00:42:00.296542 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:42:00.296553 | orchestrator | 2025-09-08 00:42:00.296564 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-08 00:42:00.296574 | orchestrator | Monday 08 September 2025 00:41:58 +0000 (0:00:00.144) 0:01:03.105 ****** 2025-09-08 00:42:00.296585 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:00.296601 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:00.296612 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:00.296623 | orchestrator |
2025-09-08 00:42:00.296634 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-08 00:42:00.296645 | orchestrator | Monday 08 September 2025 00:41:59 +0000 (0:00:00.201) 0:01:03.307 ******
2025-09-08 00:42:00.296655 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:00.296666 | orchestrator |
2025-09-08 00:42:00.296677 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-08 00:42:00.296715 | orchestrator | Monday 08 September 2025 00:41:59 +0000 (0:00:00.161) 0:01:03.469 ******
2025-09-08 00:42:00.296726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:00.296737 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:00.296748 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:00.296759 | orchestrator |
2025-09-08 00:42:00.296770 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-08 00:42:00.296781 | orchestrator | Monday 08 September 2025 00:41:59 +0000 (0:00:00.169) 0:01:03.639 ******
2025-09-08 00:42:00.296791 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:00.296802 | orchestrator |
2025-09-08 00:42:00.296813 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-08 00:42:00.296824 | orchestrator | Monday 08 September 2025 00:41:59 +0000 (0:00:00.165) 0:01:03.805 ******
2025-09-08 00:42:00.296834 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:00.296845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:00.296856 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:00.296867 | orchestrator |
2025-09-08 00:42:00.296878 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-08 00:42:00.296888 | orchestrator | Monday 08 September 2025 00:41:59 +0000 (0:00:00.161) 0:01:03.978 ******
2025-09-08 00:42:00.296899 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:00.296910 | orchestrator |
2025-09-08 00:42:00.296921 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-08 00:42:00.296932 | orchestrator | Monday 08 September 2025 00:41:59 +0000 (0:00:00.161) 0:01:04.140 ******
2025-09-08 00:42:00.296950 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:06.523958 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:06.524087 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.524103 | orchestrator |
2025-09-08 00:42:06.524116 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-08 00:42:06.524129 | orchestrator | Monday 08 September 2025 00:42:00 +0000 (0:00:00.370) 0:01:04.510 ******
2025-09-08 00:42:06.524141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:06.524152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:06.524164 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.524175 | orchestrator |
2025-09-08 00:42:06.524186 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-08 00:42:06.524198 | orchestrator | Monday 08 September 2025 00:42:00 +0000 (0:00:00.164) 0:01:04.675 ******
2025-09-08 00:42:06.524209 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:06.524220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:06.524231 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.524243 | orchestrator |
2025-09-08 00:42:06.524280 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-08 00:42:06.524291 | orchestrator | Monday 08 September 2025 00:42:00 +0000 (0:00:00.161) 0:01:04.837 ******
2025-09-08 00:42:06.524302 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.524313 | orchestrator |
2025-09-08 00:42:06.524324 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-08 00:42:06.524335 | orchestrator | Monday 08 September 2025 00:42:00 +0000 (0:00:00.154) 0:01:04.991 ******
2025-09-08 00:42:06.524346 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.524357 | orchestrator |
2025-09-08 00:42:06.524368 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-08 00:42:06.524379 | orchestrator | Monday 08 September 2025 00:42:00 +0000 (0:00:00.132) 0:01:05.124 ******
2025-09-08 00:42:06.524390 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.524400 | orchestrator |
2025-09-08 00:42:06.524411 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-08 00:42:06.524439 | orchestrator | Monday 08 September 2025 00:42:01 +0000 (0:00:00.164) 0:01:05.289 ******
2025-09-08 00:42:06.524450 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:06.524462 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-08 00:42:06.524476 | orchestrator | }
2025-09-08 00:42:06.524489 | orchestrator |
2025-09-08 00:42:06.524502 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-08 00:42:06.524515 | orchestrator | Monday 08 September 2025 00:42:01 +0000 (0:00:00.144) 0:01:05.433 ******
2025-09-08 00:42:06.524528 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:06.524541 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-08 00:42:06.524553 | orchestrator | }
2025-09-08 00:42:06.524567 | orchestrator |
2025-09-08 00:42:06.524580 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-08 00:42:06.524592 | orchestrator | Monday 08 September 2025 00:42:01 +0000 (0:00:00.143) 0:01:05.577 ******
2025-09-08 00:42:06.524606 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:06.524619 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-08 00:42:06.524633 | orchestrator | }
2025-09-08 00:42:06.524647 | orchestrator |
2025-09-08 00:42:06.524660 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-08 00:42:06.524673 | orchestrator | Monday 08 September 2025 00:42:01 +0000 (0:00:00.143) 0:01:05.720 ******
2025-09-08 00:42:06.524709 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:06.524723 | orchestrator |
2025-09-08 00:42:06.524736 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-08 00:42:06.524749 | orchestrator | Monday 08 September 2025 00:42:01 +0000 (0:00:00.502) 0:01:06.223 ******
2025-09-08 00:42:06.524762 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:06.524775 | orchestrator |
2025-09-08 00:42:06.524788 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-08 00:42:06.524801 | orchestrator | Monday 08 September 2025 00:42:02 +0000 (0:00:00.537) 0:01:06.761 ******
2025-09-08 00:42:06.524814 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:06.524827 | orchestrator |
2025-09-08 00:42:06.524838 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-08 00:42:06.524849 | orchestrator | Monday 08 September 2025 00:42:03 +0000 (0:00:00.346) 0:01:07.296 ******
2025-09-08 00:42:06.524860 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:06.524871 | orchestrator |
2025-09-08 00:42:06.524882 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-08 00:42:06.524893 | orchestrator | Monday 08 September 2025 00:42:03 +0000 (0:00:00.117) 0:01:07.642 ******
2025-09-08 00:42:06.524904 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.524915 | orchestrator |
2025-09-08 00:42:06.524926 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-08 00:42:06.524937 | orchestrator | Monday 08 September 2025 00:42:03 +0000 (0:00:00.121) 0:01:07.759 ******
2025-09-08 00:42:06.524947 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.524967 | orchestrator |
2025-09-08 00:42:06.524978 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-08 00:42:06.524989 | orchestrator | Monday 08 September 2025 00:42:03 +0000 (0:00:00.121) 0:01:07.881 ******
2025-09-08 00:42:06.525000 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:06.525011 | orchestrator |     "vgs_report": {
2025-09-08 00:42:06.525022 | orchestrator |         "vg": []
2025-09-08 00:42:06.525052 | orchestrator |     }
2025-09-08 00:42:06.525063 | orchestrator | }
2025-09-08 00:42:06.525074 | orchestrator |
2025-09-08 00:42:06.525085 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-08 00:42:06.525096 | orchestrator | Monday 08 September 2025 00:42:03 +0000 (0:00:00.153) 0:01:08.034 ******
2025-09-08 00:42:06.525107 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525118 | orchestrator |
2025-09-08 00:42:06.525129 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-08 00:42:06.525140 | orchestrator | Monday 08 September 2025 00:42:03 +0000 (0:00:00.142) 0:01:08.177 ******
2025-09-08 00:42:06.525151 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525161 | orchestrator |
2025-09-08 00:42:06.525172 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-08 00:42:06.525183 | orchestrator | Monday 08 September 2025 00:42:04 +0000 (0:00:00.147) 0:01:08.324 ******
2025-09-08 00:42:06.525194 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525205 | orchestrator |
2025-09-08 00:42:06.525216 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-08 00:42:06.525227 | orchestrator | Monday 08 September 2025 00:42:04 +0000 (0:00:00.140) 0:01:08.465 ******
2025-09-08 00:42:06.525237 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525248 | orchestrator |
2025-09-08 00:42:06.525259 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-08 00:42:06.525270 | orchestrator | Monday 08 September 2025 00:42:04 +0000 (0:00:00.142) 0:01:08.607 ******
2025-09-08 00:42:06.525281 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525292 | orchestrator |
2025-09-08 00:42:06.525303 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-08 00:42:06.525313 | orchestrator | Monday 08 September 2025 00:42:04 +0000 (0:00:00.130) 0:01:08.738 ******
2025-09-08 00:42:06.525324 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525335 | orchestrator |
2025-09-08 00:42:06.525346 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-08 00:42:06.525357 | orchestrator | Monday 08 September 2025 00:42:04 +0000 (0:00:00.134) 0:01:08.873 ******
2025-09-08 00:42:06.525368 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525378 | orchestrator |
2025-09-08 00:42:06.525389 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-08 00:42:06.525400 | orchestrator | Monday 08 September 2025 00:42:04 +0000 (0:00:00.128) 0:01:09.002 ******
2025-09-08 00:42:06.525411 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525422 | orchestrator |
2025-09-08 00:42:06.525433 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-08 00:42:06.525444 | orchestrator | Monday 08 September 2025 00:42:04 +0000 (0:00:00.148) 0:01:09.150 ******
2025-09-08 00:42:06.525455 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525466 | orchestrator |
2025-09-08 00:42:06.525476 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-08 00:42:06.525488 | orchestrator | Monday 08 September 2025 00:42:05 +0000 (0:00:00.346) 0:01:09.497 ******
2025-09-08 00:42:06.525504 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525516 | orchestrator |
2025-09-08 00:42:06.525526 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-08 00:42:06.525538 | orchestrator | Monday 08 September 2025 00:42:05 +0000 (0:00:00.150) 0:01:09.647 ******
2025-09-08 00:42:06.525548 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525559 | orchestrator |
2025-09-08 00:42:06.525570 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-08 00:42:06.525589 | orchestrator | Monday 08 September 2025 00:42:05 +0000 (0:00:00.140) 0:01:09.788 ******
2025-09-08 00:42:06.525600 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525610 | orchestrator |
2025-09-08 00:42:06.525621 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-08 00:42:06.525632 | orchestrator | Monday 08 September 2025 00:42:05 +0000 (0:00:00.160) 0:01:09.948 ******
2025-09-08 00:42:06.525643 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525654 | orchestrator |
2025-09-08 00:42:06.525665 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-08 00:42:06.525676 | orchestrator | Monday 08 September 2025 00:42:05 +0000 (0:00:00.146) 0:01:10.095 ******
2025-09-08 00:42:06.525703 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525715 | orchestrator |
2025-09-08 00:42:06.525726 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-08 00:42:06.525737 | orchestrator | Monday 08 September 2025 00:42:06 +0000 (0:00:00.143) 0:01:10.238 ******
2025-09-08 00:42:06.525748 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:06.525759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:06.525770 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525781 | orchestrator |
2025-09-08 00:42:06.525792 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-08 00:42:06.525803 | orchestrator | Monday 08 September 2025 00:42:06 +0000 (0:00:00.167) 0:01:10.406 ******
2025-09-08 00:42:06.525814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:06.525825 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:06.525836 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:06.525847 | orchestrator |
2025-09-08 00:42:06.525858 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-08 00:42:06.525869 | orchestrator | Monday 08 September 2025 00:42:06 +0000 (0:00:00.159) 0:01:10.566 ******
2025-09-08 00:42:06.525886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.655526 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.655648 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:09.655662 | orchestrator |
2025-09-08 00:42:09.655674 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-08 00:42:09.655739 | orchestrator | Monday 08 September 2025 00:42:06 +0000 (0:00:00.173) 0:01:10.739 ******
2025-09-08 00:42:09.655751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.655761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.655771 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:09.655781 | orchestrator |
2025-09-08 00:42:09.655791 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-08 00:42:09.655801 | orchestrator | Monday 08 September 2025 00:42:06 +0000 (0:00:00.175) 0:01:10.914 ******
2025-09-08 00:42:09.655811 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.655851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.655862 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:09.655871 | orchestrator |
2025-09-08 00:42:09.655881 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-08 00:42:09.655891 | orchestrator | Monday 08 September 2025 00:42:06 +0000 (0:00:00.163) 0:01:11.077 ******
2025-09-08 00:42:09.655900 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.655910 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.655920 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:09.655929 | orchestrator |
2025-09-08 00:42:09.655939 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-08 00:42:09.655949 | orchestrator | Monday 08 September 2025 00:42:07 +0000 (0:00:00.162) 0:01:11.239 ******
2025-09-08 00:42:09.655959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.655968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.655978 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:09.655988 | orchestrator |
2025-09-08 00:42:09.655997 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-08 00:42:09.656007 | orchestrator | Monday 08 September 2025 00:42:07 +0000 (0:00:00.393) 0:01:11.633 ******
2025-09-08 00:42:09.656017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.656027 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.656037 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:09.656049 | orchestrator |
2025-09-08 00:42:09.656060 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-08 00:42:09.656072 | orchestrator | Monday 08 September 2025 00:42:07 +0000 (0:00:00.158) 0:01:11.792 ******
2025-09-08 00:42:09.656084 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:09.656097 | orchestrator |
2025-09-08 00:42:09.656109 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-08 00:42:09.656120 | orchestrator | Monday 08 September 2025 00:42:08 +0000 (0:00:00.525) 0:01:12.318 ******
2025-09-08 00:42:09.656132 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:09.656143 | orchestrator |
2025-09-08 00:42:09.656154 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-08 00:42:09.656166 | orchestrator | Monday 08 September 2025 00:42:08 +0000 (0:00:00.533) 0:01:12.851 ******
2025-09-08 00:42:09.656178 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:09.656189 | orchestrator |
2025-09-08 00:42:09.656201 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-08 00:42:09.656213 | orchestrator | Monday 08 September 2025 00:42:08 +0000 (0:00:00.153) 0:01:13.005 ******
2025-09-08 00:42:09.656224 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'vg_name': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.656238 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'vg_name': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.656249 | orchestrator |
2025-09-08 00:42:09.656261 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-08 00:42:09.656280 | orchestrator | Monday 08 September 2025 00:42:08 +0000 (0:00:00.169) 0:01:13.174 ******
2025-09-08 00:42:09.656309 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.656321 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.656334 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:09.656345 | orchestrator |
2025-09-08 00:42:09.656357 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-08 00:42:09.656368 | orchestrator | Monday 08 September 2025 00:42:09 +0000 (0:00:00.183) 0:01:13.357 ******
2025-09-08 00:42:09.656379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.656391 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.656402 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:09.656412 | orchestrator |
2025-09-08 00:42:09.656422 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-08 00:42:09.656431 | orchestrator | Monday 08 September 2025 00:42:09 +0000 (0:00:00.146) 0:01:13.504 ******
2025-09-08 00:42:09.656441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
2025-09-08 00:42:09.656469 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})
2025-09-08 00:42:09.656480 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:09.656489 | orchestrator |
2025-09-08 00:42:09.656499 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-08 00:42:09.656508 | orchestrator | Monday 08 September 2025 00:42:09 +0000 (0:00:00.168) 0:01:13.673 ******
2025-09-08 00:42:09.656518 | orchestrator | ok: [testbed-node-5] => {
2025-09-08 00:42:09.656527 | orchestrator |     "lvm_report": {
2025-09-08 00:42:09.656537 | orchestrator |         "lv": [
2025-09-08 00:42:09.656547 | orchestrator |             {
2025-09-08 00:42:09.656557 | orchestrator |                 "lv_name": "osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf",
2025-09-08 00:42:09.656567 | orchestrator |                 "vg_name": "ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf"
2025-09-08 00:42:09.656576 | orchestrator |             },
2025-09-08 00:42:09.656590 | orchestrator |             {
2025-09-08 00:42:09.656600 | orchestrator |                 "lv_name": "osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2",
2025-09-08 00:42:09.656610 | orchestrator |                 "vg_name": "ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2"
2025-09-08 00:42:09.656619 | orchestrator |             }
2025-09-08 00:42:09.656629 | orchestrator |         ],
2025-09-08 00:42:09.656639 | orchestrator |         "pv": [
2025-09-08 00:42:09.656648 | orchestrator |             {
2025-09-08 00:42:09.656657 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-08 00:42:09.656667 | orchestrator |                 "vg_name": "ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2"
2025-09-08 00:42:09.656676 | orchestrator |             },
2025-09-08 00:42:09.656703 | orchestrator |             {
2025-09-08 00:42:09.656713 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-08 00:42:09.656723 | orchestrator |                 "vg_name": "ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf"
2025-09-08 00:42:09.656732 | orchestrator |             }
2025-09-08 00:42:09.656742 | orchestrator |         ]
2025-09-08 00:42:09.656752 | orchestrator |     }
2025-09-08 00:42:09.656761 | orchestrator | }
2025-09-08 00:42:09.656771 | orchestrator |
2025-09-08 00:42:09.656781 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:42:09.656791 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-08 00:42:09.656807 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-08 00:42:09.656817 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-08 00:42:09.656826 | orchestrator |
2025-09-08 00:42:09.656836 | orchestrator |
2025-09-08 00:42:09.656846 | orchestrator |
2025-09-08 00:42:09.656855 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:42:09.656865 | orchestrator | Monday 08 September 2025 00:42:09 +0000 (0:00:00.176) 0:01:13.849 ******
2025-09-08 00:42:09.656874 | orchestrator | ===============================================================================
2025-09-08 00:42:09.656884 | orchestrator | Create block VGs -------------------------------------------------------- 5.75s
2025-09-08 00:42:09.656894 | orchestrator | Create block LVs -------------------------------------------------------- 4.25s
2025-09-08 00:42:09.656903 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.96s
2025-09-08 00:42:09.656913 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.70s
2025-09-08 00:42:09.656922 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s
2025-09-08 00:42:09.656932 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.59s
2025-09-08 00:42:09.656941 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s
2025-09-08 00:42:09.656951 | orchestrator | Add known partitions to the list of available block devices ------------- 1.51s
2025-09-08 00:42:09.656967 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s
2025-09-08 00:42:10.180999 | orchestrator | Print LVM report data --------------------------------------------------- 1.05s
2025-09-08 00:42:10.181110 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2025-09-08 00:42:10.181123 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2025-09-08 00:42:10.181134 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.77s
2025-09-08 00:42:10.181144 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s
2025-09-08 00:42:10.181155 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-09-08 00:42:10.181165 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.71s
2025-09-08 00:42:10.181175 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.71s
2025-09-08 00:42:10.181186 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2025-09-08 00:42:10.181196 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-09-08 00:42:10.181206 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.70s
2025-09-08 00:42:22.485633 | orchestrator | 2025-09-08 00:42:22 | INFO  | Task 05e339ca-e6bc-460c-9581-be94e3febb21 (facts) was prepared for execution.
2025-09-08 00:42:22.485818 | orchestrator | 2025-09-08 00:42:22 | INFO  | It takes a moment until task 05e339ca-e6bc-460c-9581-be94e3febb21 (facts) has been started and output is visible here.
2025-09-08 00:42:34.520575 | orchestrator |
2025-09-08 00:42:34.520781 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-08 00:42:34.520801 | orchestrator |
2025-09-08 00:42:34.520813 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-08 00:42:34.520825 | orchestrator | Monday 08 September 2025 00:42:26 +0000 (0:00:00.274) 0:00:00.274 ******
2025-09-08 00:42:34.520836 | orchestrator | ok: [testbed-manager]
2025-09-08 00:42:34.520848 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:42:34.520859 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:42:34.520900 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:42:34.520911 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:42:34.520922 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:42:34.520933 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:34.520943 | orchestrator |
2025-09-08 00:42:34.520955 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-08 00:42:34.520966 | orchestrator | Monday 08 September 2025 00:42:27 +0000 (0:00:01.081) 0:00:01.355 ******
2025-09-08 00:42:34.520976 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:42:34.521007 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:42:34.521018 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:42:34.521030 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:42:34.521041 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:42:34.521052 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:34.521062 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:34.521073 | orchestrator |
2025-09-08 00:42:34.521084 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-08 00:42:34.521095 | orchestrator |
2025-09-08 00:42:34.521108 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-08 00:42:34.521121 | orchestrator | Monday 08 September 2025 00:42:28 +0000 (0:00:01.241) 0:00:02.597 ******
2025-09-08 00:42:34.521134 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:42:34.521147 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:42:34.521159 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:42:34.521172 | orchestrator | ok: [testbed-manager]
2025-09-08 00:42:34.521185 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:42:34.521197 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:42:34.521210 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:42:34.521222 | orchestrator |
2025-09-08 00:42:34.521235 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-08 00:42:34.521248 | orchestrator |
2025-09-08 00:42:34.521260 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-08 00:42:34.521273 | orchestrator | Monday 08 September 2025 00:42:33 +0000 (0:00:04.728) 0:00:07.325 ******
2025-09-08 00:42:34.521286 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:42:34.521298 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:42:34.521311 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:42:34.521324 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:42:34.521337 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:42:34.521351 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:42:34.521364 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:42:34.521376 | orchestrator |
2025-09-08 00:42:34.521389 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:42:34.521402 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:34.521416 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:34.521429 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:34.521442 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:34.521455 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:34.521467 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:34.521478 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:42:34.521489 | orchestrator |
2025-09-08 00:42:34.521500 | orchestrator |
2025-09-08 00:42:34.521519 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:42:34.521530 | orchestrator | Monday 08 September 2025 00:42:34 +0000 (0:00:00.538) 0:00:07.864 ******
2025-09-08 00:42:34.521541 | orchestrator | ===============================================================================
2025-09-08 00:42:34.521552 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.73s
2025-09-08 00:42:34.521563 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s
2025-09-08 00:42:34.521573 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s
2025-09-08 00:42:34.521584 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2025-09-08 00:42:46.732961 | orchestrator | 2025-09-08 00:42:46 | INFO  | Task 16e4fb49-90b9-42f2-bbf3-55101196715f (frr) was prepared for execution.
2025-09-08 00:42:46.733095 | orchestrator | 2025-09-08 00:42:46 | INFO  | It takes a moment until task 16e4fb49-90b9-42f2-bbf3-55101196715f (frr) has been started and output is visible here.
2025-09-08 00:43:13.209716 | orchestrator |
2025-09-08 00:43:13.209849 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-08 00:43:13.209866 | orchestrator |
2025-09-08 00:43:13.209879 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-08 00:43:13.209891 | orchestrator | Monday 08 September 2025 00:42:50 +0000 (0:00:00.242) 0:00:00.242 ******
2025-09-08 00:43:13.209903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-08 00:43:13.209916 | orchestrator |
2025-09-08 00:43:13.209927 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-08 00:43:13.209938 | orchestrator | Monday 08 September 2025 00:42:50 +0000 (0:00:00.228) 0:00:00.471 ******
2025-09-08 00:43:13.209949 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:13.209960 | orchestrator |
2025-09-08 00:43:13.209971 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-08 00:43:13.209982 | orchestrator | Monday 08 September 2025 00:42:52 +0000 (0:00:01.216) 0:00:01.688 ******
2025-09-08 00:43:13.209992 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:13.210003 | orchestrator |
2025-09-08 00:43:13.210014 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-08 00:43:13.210095 | orchestrator | Monday 08 September 2025 00:43:02 +0000 (0:00:10.135) 0:00:11.824 ******
2025-09-08 00:43:13.210107 | orchestrator | ok: [testbed-manager]
2025-09-08 00:43:13.210119 | orchestrator |
2025-09-08 00:43:13.210129 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-08 00:43:13.210140 | orchestrator | Monday 08 September 2025 00:43:03 +0000 (0:00:01.327) 0:00:13.151 ******
2025-09-08 00:43:13.210151 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:13.210161 | orchestrator |
2025-09-08 00:43:13.210172 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-08 00:43:13.210183 | orchestrator | Monday 08 September 2025 00:43:04 +0000 (0:00:00.987) 0:00:14.138 ******
2025-09-08 00:43:13.210193 | orchestrator | ok: [testbed-manager]
2025-09-08 00:43:13.210204 | orchestrator |
2025-09-08 00:43:13.210215 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-08 00:43:13.210226 | orchestrator | Monday 08 September 2025 00:43:05 +0000 (0:00:01.190) 0:00:15.329 ******
2025-09-08 00:43:13.210237 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-08 00:43:13.210248 | orchestrator |
2025-09-08 00:43:13.210259 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-08 00:43:13.210269 | orchestrator | Monday 08 September 2025 00:43:06 +0000 (0:00:00.853) 0:00:16.182 ******
2025-09-08 00:43:13.210280 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:43:13.210291 | orchestrator |
2025-09-08 00:43:13.210301 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-08 00:43:13.210312 | orchestrator | Monday 08 September 2025 00:43:06 +0000 (0:00:00.165) 0:00:16.348 ******
2025-09-08 00:43:13.210345 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:13.210356 | orchestrator |
2025-09-08 00:43:13.210367 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-08 00:43:13.210378 | orchestrator | Monday 08 September 2025 00:43:07 +0000 (0:00:01.003) 0:00:17.351 ******
2025-09-08 00:43:13.210389 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-08 00:43:13.210399 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-08 00:43:13.210411 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-08 00:43:13.210422 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-08 00:43:13.210432 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-08 00:43:13.210443 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-08 00:43:13.210454 | orchestrator |
2025-09-08 00:43:13.210464 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-08 00:43:13.210475 | orchestrator | Monday 08 September 2025 00:43:10 +0000 (0:00:02.168) 0:00:19.519 ******
2025-09-08 00:43:13.210486 | orchestrator | ok: [testbed-manager]
2025-09-08 00:43:13.210496 | orchestrator |
2025-09-08 00:43:13.210506 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-08 00:43:13.210517 | orchestrator | Monday 08 September 2025 00:43:11 +0000 (0:00:01.419) 0:00:20.939 ******
2025-09-08 00:43:13.210528 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:13.210538 | orchestrator |
2025-09-08 00:43:13.210549 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:43:13.210560 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 00:43:13.210570 | orchestrator |
2025-09-08 00:43:13.210581 | orchestrator |
2025-09-08 00:43:13.210592 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:43:13.210602 | orchestrator | Monday 08 September 2025 00:43:12 +0000 (0:00:01.512) 0:00:22.451 ******
2025-09-08 00:43:13.210613 | orchestrator | ===============================================================================
2025-09-08 00:43:13.210623 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.14s
2025-09-08 00:43:13.210653 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.17s
2025-09-08 00:43:13.210664 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.51s
2025-09-08 00:43:13.210675 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.42s
2025-09-08 00:43:13.210702 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.33s
2025-09-08 00:43:13.210713 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.22s
2025-09-08 00:43:13.210723 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s
2025-09-08 00:43:13.210734 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.00s
2025-09-08 00:43:13.210744 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.99s
2025-09-08 00:43:13.210755 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.85s
2025-09-08 00:43:13.210765 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2025-09-08 00:43:13.210776 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.17s
2025-09-08 00:43:13.497313 | orchestrator |
2025-09-08 00:43:13.499154 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Sep 8 00:43:13 UTC 2025
2025-09-08 00:43:13.499191 | orchestrator |
2025-09-08 00:43:15.346576 | orchestrator | 2025-09-08 00:43:15 | INFO  | Collection nutshell is prepared for execution
2025-09-08 00:43:15.346762 | orchestrator | 2025-09-08
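The frr play above loops a "Set sysctl parameters" task over six name/value pairs (enabling IPv4 forwarding, disabling ICMP redirects, enabling multipath hashing, and relaxing rp_filter for the routed setup). A minimal Python sketch that renders those same pairs as a sysctl.conf-style fragment, purely as an illustration of the data the role applies (not the role's own code):

```python
# Kernel parameters applied by the osism.services.frr role, as shown
# in the "Set sysctl parameters" task output above.
FRR_SYSCTL_PARAMS = [
    {"name": "net.ipv4.ip_forward", "value": 1},
    {"name": "net.ipv4.conf.all.send_redirects", "value": 0},
    {"name": "net.ipv4.conf.all.accept_redirects", "value": 0},
    {"name": "net.ipv4.fib_multipath_hash_policy", "value": 1},
    {"name": "net.ipv4.conf.default.ignore_routes_with_linkdown", "value": 1},
    {"name": "net.ipv4.conf.all.rp_filter", "value": 2},
]

def render_sysctl_conf(params):
    """Render name/value pairs as /etc/sysctl.d style 'key = value' lines."""
    return "\n".join(f"{p['name']} = {p['value']}" for p in params)
```

Feeding the rendered fragment to `sysctl -p` (or setting each key with `sysctl -w`) would apply the same values the role sets on testbed-manager.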
00:43:15 | INFO  | D [0] - dotfiles
2025-09-08 00:43:25.406265 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [0] - homer
2025-09-08 00:43:25.406411 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [0] - netdata
2025-09-08 00:43:25.406426 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [0] - openstackclient
2025-09-08 00:43:25.407072 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [0] - phpmyadmin
2025-09-08 00:43:25.407182 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [0] - common
2025-09-08 00:43:25.410769 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [1] -- loadbalancer
2025-09-08 00:43:25.411169 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [2] --- opensearch
2025-09-08 00:43:25.411198 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [2] --- mariadb-ng
2025-09-08 00:43:25.411423 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [3] ---- horizon
2025-09-08 00:43:25.411446 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [3] ---- keystone
2025-09-08 00:43:25.411458 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [4] ----- neutron
2025-09-08 00:43:25.412094 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [5] ------ wait-for-nova
2025-09-08 00:43:25.412117 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [5] ------ octavia
2025-09-08 00:43:25.413398 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [4] ----- barbican
2025-09-08 00:43:25.413585 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [4] ----- designate
2025-09-08 00:43:25.413617 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [4] ----- ironic
2025-09-08 00:43:25.413662 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [4] ----- placement
2025-09-08 00:43:25.413760 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [4] ----- magnum
2025-09-08 00:43:25.413978 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [1] -- openvswitch
2025-09-08 00:43:25.414332 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [2] --- ovn
2025-09-08 00:43:25.414663 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [1] -- memcached
2025-09-08 00:43:25.414685 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [1] -- redis
2025-09-08 00:43:25.415034 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [1] -- rabbitmq-ng
2025-09-08 00:43:25.415278 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [0] - kubernetes
2025-09-08 00:43:25.417234 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [1] -- kubeconfig
2025-09-08 00:43:25.417255 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [1] -- copy-kubeconfig
2025-09-08 00:43:25.417495 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [0] - ceph
2025-09-08 00:43:25.420190 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [1] -- ceph-pools
2025-09-08 00:43:25.420442 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [2] --- copy-ceph-keys
2025-09-08 00:43:25.420464 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [3] ---- cephclient
2025-09-08 00:43:25.420477 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-08 00:43:25.420489 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [4] ----- wait-for-keystone
2025-09-08 00:43:25.420500 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-08 00:43:25.420529 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [5] ------ glance
2025-09-08 00:43:25.420562 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [5] ------ cinder
2025-09-08 00:43:25.420575 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [5] ------ nova
2025-09-08 00:43:25.420586 | orchestrator | 2025-09-08 00:43:25 | INFO  | A [4] ----- prometheus
2025-09-08 00:43:25.420746 | orchestrator | 2025-09-08 00:43:25 | INFO  | D [5] ------ grafana
2025-09-08 00:43:25.646796 | orchestrator | 2025-09-08 00:43:25 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-08 00:43:25.648148 | orchestrator | 2025-09-08 00:43:25 | INFO  | Tasks are running in the background
2025-09-08 00:43:28.662798 | orchestrator | 2025-09-08 00:43:28 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-09-08 00:43:30.817089 | orchestrator | 2025-09-08 00:43:30 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:30.817215 | orchestrator | 2025-09-08 00:43:30 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:30.817232 | orchestrator | 2025-09-08 00:43:30 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:43:30.831177 | orchestrator | 2025-09-08 00:43:30 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:43:30.831224 | orchestrator | 2025-09-08 00:43:30 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED
2025-09-08 00:43:30.831236 | orchestrator | 2025-09-08 00:43:30 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED
2025-09-08 00:43:30.831248 | orchestrator | 2025-09-08 00:43:30 | INFO  | Task 19c93fbd-1ead-4295-a31c-0b161b8d11c2 is in state STARTED
2025-09-08 00:43:30.831260 | orchestrator | 2025-09-08 00:43:30 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:43:33.871535 | orchestrator | 2025-09-08 00:43:33 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:33.871694 | orchestrator | 2025-09-08 00:43:33 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:33.871708 | orchestrator | 2025-09-08 00:43:33 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:43:33.871719 | orchestrator | 2025-09-08 00:43:33 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:43:33.871729 | orchestrator | 2025-09-08 00:43:33 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED
2025-09-08 00:43:33.871739 | orchestrator | 2025-09-08 00:43:33 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED
2025-09-08 00:43:33.871748 | orchestrator | 2025-09-08 00:43:33 | INFO  | Task 19c93fbd-1ead-4295-a31c-0b161b8d11c2 is in state STARTED
2025-09-08 00:43:33.871759 | orchestrator | 2025-09-08 00:43:33 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:43:36.903371 | orchestrator | 2025-09-08 00:43:36 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:36.903700 | orchestrator | 2025-09-08 00:43:36 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:36.906857 | orchestrator | 2025-09-08 00:43:36 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:43:36.907733 | orchestrator | 2025-09-08 00:43:36 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:43:36.911555 | orchestrator | 2025-09-08 00:43:36 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED
2025-09-08 00:43:36.914420 | orchestrator | 2025-09-08 00:43:36 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED
2025-09-08 00:43:36.914981 | orchestrator | 2025-09-08 00:43:36 | INFO  | Task 19c93fbd-1ead-4295-a31c-0b161b8d11c2 is in state STARTED
2025-09-08 00:43:36.915005 | orchestrator | 2025-09-08 00:43:36 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:43:39.949836 | orchestrator | 2025-09-08 00:43:39 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:39.949979 | orchestrator | 2025-09-08 00:43:39 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:39.949994 | orchestrator | 2025-09-08 00:43:39 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:43:39.950006 | orchestrator | 2025-09-08 00:43:39 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:43:39.950070 | orchestrator | 2025-09-08 00:43:39 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED
2025-09-08 00:43:39.950083 | orchestrator | 2025-09-08 00:43:39 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED
2025-09-08 00:43:39.950094 | orchestrator | 2025-09-08 00:43:39 | INFO  | Task 19c93fbd-1ead-4295-a31c-0b161b8d11c2 is in state STARTED
2025-09-08 00:43:39.950105 | orchestrator | 2025-09-08 00:43:39 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:43:42.984998 | orchestrator | 2025-09-08 00:43:42 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:42.985238 | orchestrator | 2025-09-08 00:43:42 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:42.985951 | orchestrator | 2025-09-08 00:43:42 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:43:42.987925 | orchestrator | 2025-09-08 00:43:42 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:43:42.988598 | orchestrator | 2025-09-08 00:43:42 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED
2025-09-08 00:43:42.990601 | orchestrator | 2025-09-08 00:43:42 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED
2025-09-08 00:43:42.991188 | orchestrator | 2025-09-08 00:43:42 | INFO  | Task 19c93fbd-1ead-4295-a31c-0b161b8d11c2 is in state STARTED
2025-09-08 00:43:42.991377 | orchestrator | 2025-09-08 00:43:42 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:43:46.250178 | orchestrator | 2025-09-08 00:43:46 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:46.253902 | orchestrator | 2025-09-08 00:43:46 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:46.254350 | orchestrator | 2025-09-08 00:43:46 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:43:46.255172 | orchestrator | 2025-09-08 00:43:46 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:43:46.257042 | orchestrator | 2025-09-08 00:43:46 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED
2025-09-08 00:43:46.257397 | orchestrator | 2025-09-08 00:43:46 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED
2025-09-08 00:43:46.258414 | orchestrator | 2025-09-08 00:43:46 | INFO  | Task 19c93fbd-1ead-4295-a31c-0b161b8d11c2 is in state STARTED
2025-09-08 00:43:46.258535 | orchestrator | 2025-09-08 00:43:46 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:43:49.477740 | orchestrator | 2025-09-08 00:43:49 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:49.477870 | orchestrator | 2025-09-08 00:43:49 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:49.477887 | orchestrator | 2025-09-08 00:43:49 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:43:49.477899 | orchestrator | 2025-09-08 00:43:49 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:43:49.477942 | orchestrator | 2025-09-08 00:43:49 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED
2025-09-08 00:43:49.477954 | orchestrator | 2025-09-08 00:43:49 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED
2025-09-08 00:43:49.477965 | orchestrator | 2025-09-08 00:43:49 | INFO  | Task 19c93fbd-1ead-4295-a31c-0b161b8d11c2 is in state STARTED
2025-09-08 00:43:49.477977 | orchestrator | 2025-09-08 00:43:49 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:43:52.443954 | orchestrator | 2025-09-08 00:43:52 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:52.446771 | orchestrator | 2025-09-08 00:43:52 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:52.446806 | orchestrator | 2025-09-08 00:43:52 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:43:52.447574 | orchestrator | 2025-09-08 00:43:52 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:43:52.448721 | orchestrator | 2025-09-08 00:43:52 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED
2025-09-08 00:43:52.449637 | orchestrator | 2025-09-08 00:43:52 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED
2025-09-08 00:43:52.450398 | orchestrator | 2025-09-08 00:43:52 | INFO  | Task 19c93fbd-1ead-4295-a31c-0b161b8d11c2 is in state STARTED
2025-09-08 00:43:52.450420 | orchestrator | 2025-09-08 00:43:52 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:43:55.500365 | orchestrator | 2025-09-08 00:43:55 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:55.500906 | orchestrator | 2025-09-08 00:43:55 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:55.511318 | orchestrator | 2025-09-08 00:43:55 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:43:55.519295 | orchestrator | 2025-09-08 00:43:55 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:43:55.607391 | orchestrator |
2025-09-08 00:43:55.607431 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-08 00:43:55.607444 | orchestrator |
2025-09-08 00:43:55.607456 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-09-08 00:43:55.607468 | orchestrator | Monday 08 September 2025 00:43:39 +0000 (0:00:00.497) 0:00:00.497 ******
2025-09-08 00:43:55.607479 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:43:55.607491 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:43:55.607503 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:43:55.607514 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:43:55.607525 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:43:55.607536 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:43:55.607547 | orchestrator | changed: [testbed-manager]
2025-09-08 00:43:55.607558 | orchestrator |
2025-09-08 00:43:55.607569 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-09-08 00:43:55.607580 | orchestrator | Monday 08 September 2025 00:43:44 +0000 (0:00:04.283) 0:00:04.780 ******
2025-09-08 00:43:55.607593 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-08 00:43:55.607655 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-08 00:43:55.607668 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-08 00:43:55.607679 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-08 00:43:55.607690 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-08 00:43:55.607701 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-08 00:43:55.607711 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-08 00:43:55.607731 | orchestrator |
2025-09-08 00:43:55.607743 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-09-08 00:43:55.607780 | orchestrator | Monday 08 September 2025 00:43:45 +0000 (0:00:01.838) 0:00:06.618 ******
2025-09-08 00:43:55.607797 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:43:45.516029', 'end': '2025-09-08 00:43:45.522511', 'delta': '0:00:00.006482', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-08 00:43:55.607818 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:43:45.173896', 'end': '2025-09-08 00:43:45.182843', 'delta': '0:00:00.008947', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-08 00:43:55.607830 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:43:45.266536', 'end': '2025-09-08 00:43:45.273719', 'delta': '0:00:00.007183', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-08 00:43:55.607866 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:43:45.565055', 'end': '2025-09-08 00:43:45.571058', 'delta': '0:00:00.006003', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-08 00:43:55.607883 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:43:45.602349', 'end': '2025-09-08 00:43:45.610935', 'delta': '0:00:00.008586', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-08 00:43:55.607909 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:43:45.454027', 'end': '2025-09-08 00:43:45.463719', 'delta': '0:00:00.009692', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-08 00:43:55.607921 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-08 00:43:45.618983', 'end': '2025-09-08 00:43:45.624913', 'delta': '0:00:00.005930', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-08 00:43:55.607933 | orchestrator |
2025-09-08 00:43:55.607944 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-09-08 00:43:55.607955 | orchestrator | Monday 08 September 2025 00:43:48 +0000 (0:00:02.133) 0:00:08.751 ******
2025-09-08 00:43:55.607966 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-08 00:43:55.607978 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-08 00:43:55.607992 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-08 00:43:55.608004 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-08 00:43:55.608018 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-08 00:43:55.608030 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-08 00:43:55.608043 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-08 00:43:55.608055 | orchestrator |
2025-09-08 00:43:55.608069 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-09-08 00:43:55.608082 | orchestrator | Monday 08 September 2025 00:43:49 +0000 (0:00:01.473) 0:00:10.225 ******
2025-09-08 00:43:55.608095 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-08 00:43:55.608107 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-08 00:43:55.608120 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-08 00:43:55.608133 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-08 00:43:55.608145 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-08 00:43:55.608158 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-08 00:43:55.608171 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-08 00:43:55.608183 | orchestrator |
2025-09-08 00:43:55.608197 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:43:55.608217 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:43:55.608240 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:43:55.608254 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:43:55.608268 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:43:55.608281 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:43:55.608294 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:43:55.608311 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:43:55.608325 | orchestrator |
2025-09-08 00:43:55.608338 | orchestrator |
2025-09-08 00:43:55.608349 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:43:55.608360 | orchestrator | Monday 08 September 2025 00:43:52 +0000 (0:00:03.164) 0:00:13.389 ******
2025-09-08 00:43:55.608371 | orchestrator | ===============================================================================
2025-09-08 00:43:55.608382 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.28s
2025-09-08 00:43:55.608393 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.16s
2025-09-08 00:43:55.608404 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.13s
2025-09-08 00:43:55.608415 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.84s
2025-09-08 00:43:55.608426 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.47s
2025-09-08 00:43:55.608437 | orchestrator | 2025-09-08 00:43:55 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED
2025-09-08 00:43:55.608448 | orchestrator | 2025-09-08 00:43:55 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED
2025-09-08 00:43:55.608459 | orchestrator | 2025-09-08 00:43:55 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED
2025-09-08 00:43:55.608470 | orchestrator | 2025-09-08 00:43:55 | INFO  | Task 19c93fbd-1ead-4295-a31c-0b161b8d11c2 is in state SUCCESS
2025-09-08 00:43:55.608482 | orchestrator | 2025-09-08 00:43:55 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:43:58.609010 | orchestrator | 2025-09-08 00:43:58 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED
2025-09-08 00:43:58.609119 | orchestrator | 2025-09-08 00:43:58 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED
2025-09-08 00:43:58.609133 | orchestrator | 2025-09-08 00:43:58 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is
in state STARTED 2025-09-08 00:43:58.609160 | orchestrator | 2025-09-08 00:43:58 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:43:58.609171 | orchestrator | 2025-09-08 00:43:58 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:43:58.609193 | orchestrator | 2025-09-08 00:43:58 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED 2025-09-08 00:43:58.609868 | orchestrator | 2025-09-08 00:43:58 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:43:58.609890 | orchestrator | 2025-09-08 00:43:58 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:01.644426 | orchestrator | 2025-09-08 00:44:01 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:01.644543 | orchestrator | 2025-09-08 00:44:01 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED 2025-09-08 00:44:01.645275 | orchestrator | 2025-09-08 00:44:01 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:01.646861 | orchestrator | 2025-09-08 00:44:01 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:01.648591 | orchestrator | 2025-09-08 00:44:01 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:01.649182 | orchestrator | 2025-09-08 00:44:01 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED 2025-09-08 00:44:01.651515 | orchestrator | 2025-09-08 00:44:01 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:01.651944 | orchestrator | 2025-09-08 00:44:01 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:04.703482 | orchestrator | 2025-09-08 00:44:04 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:04.703968 | orchestrator | 2025-09-08 00:44:04 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in 
state STARTED 2025-09-08 00:44:04.704006 | orchestrator | 2025-09-08 00:44:04 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:04.704409 | orchestrator | 2025-09-08 00:44:04 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:04.705048 | orchestrator | 2025-09-08 00:44:04 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:04.705549 | orchestrator | 2025-09-08 00:44:04 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED 2025-09-08 00:44:04.706130 | orchestrator | 2025-09-08 00:44:04 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:04.706157 | orchestrator | 2025-09-08 00:44:04 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:07.750222 | orchestrator | 2025-09-08 00:44:07 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:07.754944 | orchestrator | 2025-09-08 00:44:07 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED 2025-09-08 00:44:07.755964 | orchestrator | 2025-09-08 00:44:07 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:07.761447 | orchestrator | 2025-09-08 00:44:07 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:07.763352 | orchestrator | 2025-09-08 00:44:07 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:07.764789 | orchestrator | 2025-09-08 00:44:07 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED 2025-09-08 00:44:07.765478 | orchestrator | 2025-09-08 00:44:07 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:07.765501 | orchestrator | 2025-09-08 00:44:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:10.804203 | orchestrator | 2025-09-08 00:44:10 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state 
STARTED 2025-09-08 00:44:10.806470 | orchestrator | 2025-09-08 00:44:10 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED 2025-09-08 00:44:10.881091 | orchestrator | 2025-09-08 00:44:10 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:10.881155 | orchestrator | 2025-09-08 00:44:10 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:10.881169 | orchestrator | 2025-09-08 00:44:10 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:10.881198 | orchestrator | 2025-09-08 00:44:10 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED 2025-09-08 00:44:10.881242 | orchestrator | 2025-09-08 00:44:10 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:10.881254 | orchestrator | 2025-09-08 00:44:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:13.854166 | orchestrator | 2025-09-08 00:44:13 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:13.854295 | orchestrator | 2025-09-08 00:44:13 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED 2025-09-08 00:44:13.854309 | orchestrator | 2025-09-08 00:44:13 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:13.854320 | orchestrator | [32m2025-09-08 00:44:13 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:13.854331 | orchestrator | 2025-09-08 00:44:13 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:13.854343 | orchestrator | 2025-09-08 00:44:13 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED 2025-09-08 00:44:13.854354 | orchestrator | 2025-09-08 00:44:13 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:13.854365 | orchestrator | 2025-09-08 00:44:13 | INFO  | Wait 1 second(s) until the next 
check 2025-09-08 00:44:16.983321 | orchestrator | 2025-09-08 00:44:16 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:16.983442 | orchestrator | 2025-09-08 00:44:16 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED 2025-09-08 00:44:16.983456 | orchestrator | 2025-09-08 00:44:16 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:16.983873 | orchestrator | 2025-09-08 00:44:16 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:16.983891 | orchestrator | 2025-09-08 00:44:16 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:16.983904 | orchestrator | 2025-09-08 00:44:16 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED 2025-09-08 00:44:16.983916 | orchestrator | 2025-09-08 00:44:16 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:16.983930 | orchestrator | 2025-09-08 00:44:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:20.026147 | orchestrator | 2025-09-08 00:44:20 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:20.026270 | orchestrator | 2025-09-08 00:44:20 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state STARTED 2025-09-08 00:44:20.026283 | orchestrator | 2025-09-08 00:44:20 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:20.026295 | orchestrator | 2025-09-08 00:44:20 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:20.026307 | orchestrator | 2025-09-08 00:44:20 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:20.026318 | orchestrator | 2025-09-08 00:44:20 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED 2025-09-08 00:44:20.027962 | orchestrator | 2025-09-08 00:44:20 | INFO  | Task 
46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:20.027985 | orchestrator | 2025-09-08 00:44:20 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:23.076527 | orchestrator | 2025-09-08 00:44:23 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:23.077202 | orchestrator | 2025-09-08 00:44:23 | INFO  | Task e8f0b639-ffa9-4e9f-9c4a-ea195d1fb18e is in state SUCCESS 2025-09-08 00:44:23.078199 | orchestrator | 2025-09-08 00:44:23 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:23.078968 | orchestrator | 2025-09-08 00:44:23 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:23.081140 | orchestrator | 2025-09-08 00:44:23 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:23.081178 | orchestrator | 2025-09-08 00:44:23 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state STARTED 2025-09-08 00:44:23.082344 | orchestrator | 2025-09-08 00:44:23 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:23.082367 | orchestrator | 2025-09-08 00:44:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:26.125921 | orchestrator | 2025-09-08 00:44:26 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:26.126240 | orchestrator | 2025-09-08 00:44:26 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:26.126778 | orchestrator | 2025-09-08 00:44:26 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:26.127522 | orchestrator | 2025-09-08 00:44:26 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:26.127565 | orchestrator | 2025-09-08 00:44:26 | INFO  | Task 54c1d6bf-9958-47ef-b567-4163513680b0 is in state SUCCESS 2025-09-08 00:44:26.128573 | orchestrator | 2025-09-08 00:44:26 | INFO  | Task 
46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:26.128656 | orchestrator | 2025-09-08 00:44:26 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:29.185820 | orchestrator | 2025-09-08 00:44:29 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:29.185926 | orchestrator | 2025-09-08 00:44:29 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:29.185942 | orchestrator | 2025-09-08 00:44:29 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:29.185954 | orchestrator | 2025-09-08 00:44:29 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:29.185965 | orchestrator | 2025-09-08 00:44:29 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:29.185976 | orchestrator | 2025-09-08 00:44:29 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:32.234316 | orchestrator | 2025-09-08 00:44:32 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:32.235015 | orchestrator | 2025-09-08 00:44:32 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:32.235048 | orchestrator | 2025-09-08 00:44:32 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:32.235062 | orchestrator | 2025-09-08 00:44:32 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:32.235076 | orchestrator | 2025-09-08 00:44:32 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:32.235090 | orchestrator | 2025-09-08 00:44:32 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:35.296431 | orchestrator | 2025-09-08 00:44:35 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:35.302086 | orchestrator | 2025-09-08 00:44:35 | INFO  | Task 
c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:35.307003 | orchestrator | 2025-09-08 00:44:35 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:35.315843 | orchestrator | 2025-09-08 00:44:35 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:35.323338 | orchestrator | 2025-09-08 00:44:35 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:35.324052 | orchestrator | 2025-09-08 00:44:35 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:38.395269 | orchestrator | 2025-09-08 00:44:38 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:38.398075 | orchestrator | 2025-09-08 00:44:38 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:38.401195 | orchestrator | 2025-09-08 00:44:38 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:38.405177 | orchestrator | 2025-09-08 00:44:38 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:38.407487 | orchestrator | 2025-09-08 00:44:38 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:38.407648 | orchestrator | 2025-09-08 00:44:38 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:41.446734 | orchestrator | 2025-09-08 00:44:41 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:41.446838 | orchestrator | 2025-09-08 00:44:41 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:41.448139 | orchestrator | 2025-09-08 00:44:41 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:41.449164 | orchestrator | 2025-09-08 00:44:41 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:41.451751 | orchestrator | 2025-09-08 00:44:41 | INFO  | Task 
46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:41.453020 | orchestrator | 2025-09-08 00:44:41 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:44.504786 | orchestrator | 2025-09-08 00:44:44 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:44.504903 | orchestrator | 2025-09-08 00:44:44 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:44.506506 | orchestrator | 2025-09-08 00:44:44 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:44.507455 | orchestrator | 2025-09-08 00:44:44 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:44.508912 | orchestrator | 2025-09-08 00:44:44 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:44.508943 | orchestrator | 2025-09-08 00:44:44 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:47.550523 | orchestrator | 2025-09-08 00:44:47 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:47.552568 | orchestrator | 2025-09-08 00:44:47 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:47.554788 | orchestrator | 2025-09-08 00:44:47 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:47.556514 | orchestrator | 2025-09-08 00:44:47 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:47.557552 | orchestrator | 2025-09-08 00:44:47 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:47.557573 | orchestrator | 2025-09-08 00:44:47 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:50.705006 | orchestrator | 2025-09-08 00:44:50 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:50.705150 | orchestrator | 2025-09-08 00:44:50 | INFO  | Task 
c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:50.705167 | orchestrator | 2025-09-08 00:44:50 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:50.705180 | orchestrator | 2025-09-08 00:44:50 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:50.705191 | orchestrator | 2025-09-08 00:44:50 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:50.705202 | orchestrator | 2025-09-08 00:44:50 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:53.735628 | orchestrator | 2025-09-08 00:44:53 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:53.737193 | orchestrator | 2025-09-08 00:44:53 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:53.738262 | orchestrator | 2025-09-08 00:44:53 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:53.740553 | orchestrator | 2025-09-08 00:44:53 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state STARTED 2025-09-08 00:44:53.742744 | orchestrator | 2025-09-08 00:44:53 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:53.742790 | orchestrator | 2025-09-08 00:44:53 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:44:56.810814 | orchestrator | 2025-09-08 00:44:56.810931 | orchestrator | 2025-09-08 00:44:56.810949 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-08 00:44:56.810962 | orchestrator | 2025-09-08 00:44:56.810974 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-08 00:44:56.810986 | orchestrator | Monday 08 September 2025 00:43:39 +0000 (0:00:00.714) 0:00:00.714 ****** 2025-09-08 00:44:56.810998 | orchestrator | ok: [testbed-manager] => { 2025-09-08 00:44:56.811011 | orchestrator |  
"msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-09-08 00:44:56.811023 | orchestrator | } 2025-09-08 00:44:56.811035 | orchestrator | 2025-09-08 00:44:56.811046 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-08 00:44:56.811085 | orchestrator | Monday 08 September 2025 00:43:40 +0000 (0:00:00.640) 0:00:01.354 ****** 2025-09-08 00:44:56.811097 | orchestrator | ok: [testbed-manager] 2025-09-08 00:44:56.811109 | orchestrator | 2025-09-08 00:44:56.811121 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-08 00:44:56.811132 | orchestrator | Monday 08 September 2025 00:43:42 +0000 (0:00:02.158) 0:00:03.513 ****** 2025-09-08 00:44:56.811160 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-08 00:44:56.811172 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-08 00:44:56.811183 | orchestrator | 2025-09-08 00:44:56.811195 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-08 00:44:56.811206 | orchestrator | Monday 08 September 2025 00:43:43 +0000 (0:00:01.388) 0:00:04.901 ****** 2025-09-08 00:44:56.811217 | orchestrator | changed: [testbed-manager] 2025-09-08 00:44:56.811228 | orchestrator | 2025-09-08 00:44:56.811239 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-08 00:44:56.811250 | orchestrator | Monday 08 September 2025 00:43:48 +0000 (0:00:04.815) 0:00:09.717 ****** 2025-09-08 00:44:56.811261 | orchestrator | changed: [testbed-manager] 2025-09-08 00:44:56.811272 | orchestrator | 2025-09-08 00:44:56.811284 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-08 00:44:56.811295 | orchestrator | Monday 08 September 2025 00:43:51 +0000 (0:00:02.549) 0:00:12.266 ****** 
2025-09-08 00:44:56.811306 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-09-08 00:44:56.811339 | orchestrator | ok: [testbed-manager]
2025-09-08 00:44:56.811352 | orchestrator |
2025-09-08 00:44:56.811365 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-09-08 00:44:56.811378 | orchestrator | Monday 08 September 2025 00:44:17 +0000 (0:00:26.659) 0:00:38.926 ******
2025-09-08 00:44:56.811390 | orchestrator | changed: [testbed-manager]
2025-09-08 00:44:56.811403 | orchestrator |
2025-09-08 00:44:56.811415 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:44:56.811428 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:44:56.811443 | orchestrator |
2025-09-08 00:44:56.811455 | orchestrator |
2025-09-08 00:44:56.811469 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:44:56.811481 | orchestrator | Monday 08 September 2025 00:44:19 +0000 (0:00:02.071) 0:00:40.998 ******
2025-09-08 00:44:56.811494 | orchestrator | ===============================================================================
2025-09-08 00:44:56.811507 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.66s
2025-09-08 00:44:56.811519 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.82s
2025-09-08 00:44:56.811533 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.55s
2025-09-08 00:44:56.811546 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.16s
2025-09-08 00:44:56.811559 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.07s
2025-09-08 00:44:56.811596 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.39s
2025-09-08 00:44:56.811607 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.64s
2025-09-08 00:44:56.811618 | orchestrator |
2025-09-08 00:44:56.811629 | orchestrator |
2025-09-08 00:44:56.811640 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-09-08 00:44:56.811651 | orchestrator |
2025-09-08 00:44:56.811661 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-09-08 00:44:56.811672 | orchestrator | Monday 08 September 2025 00:43:39 +0000 (0:00:00.981) 0:00:00.982 ******
2025-09-08 00:44:56.811683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-09-08 00:44:56.811695 | orchestrator |
2025-09-08 00:44:56.811707 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-09-08 00:44:56.811718 | orchestrator | Monday 08 September 2025 00:43:39 +0000 (0:00:00.448) 0:00:01.430 ******
2025-09-08 00:44:56.811729 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-09-08 00:44:56.811739 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-09-08 00:44:56.811750 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-09-08 00:44:56.811761 | orchestrator |
2025-09-08 00:44:56.811772 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-09-08 00:44:56.811783 | orchestrator | Monday 08 September 2025 00:43:43 +0000 (0:00:03.202) 0:00:04.633 ******
2025-09-08 00:44:56.811794 | orchestrator | changed: [testbed-manager]
2025-09-08 00:44:56.811805 | orchestrator |
2025-09-08 00:44:56.811816 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-09-08 00:44:56.811827 | orchestrator | Monday 08 September 2025 00:43:45 +0000 (0:00:02.525) 0:00:07.158 ******
2025-09-08 00:44:56.811854 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-09-08 00:44:56.811866 | orchestrator | ok: [testbed-manager]
2025-09-08 00:44:56.811877 | orchestrator |
2025-09-08 00:44:56.811888 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-09-08 00:44:56.811898 | orchestrator | Monday 08 September 2025 00:44:20 +0000 (0:00:34.579) 0:00:41.738 ******
2025-09-08 00:44:56.811909 | orchestrator | changed: [testbed-manager]
2025-09-08 00:44:56.811928 | orchestrator |
2025-09-08 00:44:56.811939 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-09-08 00:44:56.811950 | orchestrator | Monday 08 September 2025 00:44:21 +0000 (0:00:00.818) 0:00:42.556 ******
2025-09-08 00:44:56.811961 | orchestrator | ok: [testbed-manager]
2025-09-08 00:44:56.811972 | orchestrator |
2025-09-08 00:44:56.811983 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-09-08 00:44:56.811993 | orchestrator | Monday 08 September 2025 00:44:21 +0000 (0:00:00.633) 0:00:43.190 ******
2025-09-08 00:44:56.812004 | orchestrator | changed: [testbed-manager]
2025-09-08 00:44:56.812015 | orchestrator |
2025-09-08 00:44:56.812026 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-09-08 00:44:56.812037 | orchestrator | Monday 08 September 2025 00:44:23 +0000 (0:00:02.009) 0:00:45.199 ******
2025-09-08 00:44:56.812053 | orchestrator | changed: [testbed-manager]
2025-09-08 00:44:56.812064 | orchestrator |
2025-09-08 00:44:56.812075 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-09-08 00:44:56.812086 | orchestrator | Monday 08 September 2025 00:44:24 +0000 (0:00:00.715) 0:00:45.915 ******
2025-09-08 00:44:56.812097 | orchestrator | changed: [testbed-manager]
2025-09-08 00:44:56.812108 | orchestrator |
2025-09-08 00:44:56.812119 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-09-08 00:44:56.812129 | orchestrator | Monday 08 September 2025 00:44:25 +0000 (0:00:00.961) 0:00:46.876 ******
2025-09-08 00:44:56.812140 | orchestrator | ok: [testbed-manager]
2025-09-08 00:44:56.812151 | orchestrator |
2025-09-08 00:44:56.812162 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:44:56.812173 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:44:56.812183 | orchestrator |
2025-09-08 00:44:56.812194 | orchestrator |
2025-09-08 00:44:56.812205 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:44:56.812216 | orchestrator | Monday 08 September 2025 00:44:25 +0000 (0:00:00.337) 0:00:47.213 ******
2025-09-08 00:44:56.812227 | orchestrator | ===============================================================================
2025-09-08 00:44:56.812238 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.58s
2025-09-08 00:44:56.812248 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.20s
2025-09-08 00:44:56.812259 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.53s
2025-09-08 00:44:56.812270 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.01s
2025-09-08 00:44:56.812281 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.96s
2025-09-08 00:44:56.812292 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.82s
2025-09-08 00:44:56.812302 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.72s 2025-09-08 00:44:56.812313 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.63s 2025-09-08 00:44:56.812324 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.45s 2025-09-08 00:44:56.812335 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.34s 2025-09-08 00:44:56.812346 | orchestrator | 2025-09-08 00:44:56.812357 | orchestrator | 2025-09-08 00:44:56.812368 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-08 00:44:56.812378 | orchestrator | 2025-09-08 00:44:56.812389 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-08 00:44:56.812400 | orchestrator | Monday 08 September 2025 00:43:58 +0000 (0:00:00.246) 0:00:00.246 ****** 2025-09-08 00:44:56.812411 | orchestrator | ok: [testbed-manager] 2025-09-08 00:44:56.812422 | orchestrator | 2025-09-08 00:44:56.812433 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-08 00:44:56.812444 | orchestrator | Monday 08 September 2025 00:43:59 +0000 (0:00:01.171) 0:00:01.418 ****** 2025-09-08 00:44:56.812461 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-08 00:44:56.812472 | orchestrator | 2025-09-08 00:44:56.812482 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-08 00:44:56.812493 | orchestrator | Monday 08 September 2025 00:44:00 +0000 (0:00:00.523) 0:00:01.941 ****** 2025-09-08 00:44:56.812504 | orchestrator | changed: [testbed-manager] 2025-09-08 00:44:56.812514 | orchestrator | 2025-09-08 00:44:56.812525 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-08 00:44:56.812536 | 
orchestrator | Monday 08 September 2025 00:44:01 +0000 (0:00:01.137) 0:00:03.079 ****** 2025-09-08 00:44:56.812546 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-09-08 00:44:56.812557 | orchestrator | ok: [testbed-manager] 2025-09-08 00:44:56.812568 | orchestrator | 2025-09-08 00:44:56.812616 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-08 00:44:56.812627 | orchestrator | Monday 08 September 2025 00:44:47 +0000 (0:00:45.499) 0:00:48.578 ****** 2025-09-08 00:44:56.812638 | orchestrator | changed: [testbed-manager] 2025-09-08 00:44:56.812649 | orchestrator | 2025-09-08 00:44:56.812660 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:44:56.812671 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:44:56.812682 | orchestrator | 2025-09-08 00:44:56.812693 | orchestrator | 2025-09-08 00:44:56.812704 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:44:56.812722 | orchestrator | Monday 08 September 2025 00:44:54 +0000 (0:00:07.708) 0:00:56.287 ****** 2025-09-08 00:44:56.812733 | orchestrator | =============================================================================== 2025-09-08 00:44:56.812744 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 45.50s 2025-09-08 00:44:56.812755 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 7.71s 2025-09-08 00:44:56.812766 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.17s 2025-09-08 00:44:56.812776 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.14s 2025-09-08 00:44:56.812787 | orchestrator | osism.services.phpmyadmin : Create required directories 
----------------- 0.52s 2025-09-08 00:44:56.812798 | orchestrator | 2025-09-08 00:44:56 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state STARTED 2025-09-08 00:44:56.812809 | orchestrator | 2025-09-08 00:44:56 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:44:56.812825 | orchestrator | 2025-09-08 00:44:56 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:44:56.812836 | orchestrator | 2025-09-08 00:44:56 | INFO  | Task 8af375d7-40a7-46cf-9995-5cba4e715744 is in state SUCCESS 2025-09-08 00:44:56.812847 | orchestrator | 2025-09-08 00:44:56 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:44:56.812857 | orchestrator | 2025-09-08 00:44:56 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:45:09.140635 | orchestrator | 2025-09-08 00:45:09 | INFO  | Task ff94ef44-b520-487c-b2cc-4c91fda6ec60 is in state SUCCESS 2025-09-08 00:45:09.140728 | orchestrator | 2025-09-08 00:45:09.140744 | orchestrator | 2025-09-08 00:45:09.140756 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:45:09.140768 | orchestrator | 2025-09-08 00:45:09.140779 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:45:09.140790 | orchestrator | Monday 08 September 2025 00:43:39 +0000 (0:00:01.107) 0:00:01.107 ****** 2025-09-08 00:45:09.140802 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-08 00:45:09.140813 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-08 00:45:09.140823 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-08 00:45:09.140834 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-08 00:45:09.140845 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-08 00:45:09.140856 | orchestrator | changed: [testbed-node-4] => 
(item=enable_netdata_True) 2025-09-08 00:45:09.140866 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-08 00:45:09.140877 | orchestrator | 2025-09-08 00:45:09.140888 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-08 00:45:09.140899 | orchestrator | 2025-09-08 00:45:09.140910 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-08 00:45:09.140921 | orchestrator | Monday 08 September 2025 00:43:41 +0000 (0:00:02.299) 0:00:03.407 ****** 2025-09-08 00:45:09.140950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:45:09.140969 | orchestrator | 2025-09-08 00:45:09.140980 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-08 00:45:09.140992 | orchestrator | Monday 08 September 2025 00:43:43 +0000 (0:00:01.495) 0:00:04.902 ****** 2025-09-08 00:45:09.141003 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:45:09.141015 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:45:09.141025 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:45:09.141036 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:45:09.141047 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:45:09.141057 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:09.141068 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:45:09.141079 | orchestrator | 2025-09-08 00:45:09.141090 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-08 00:45:09.141122 | orchestrator | Monday 08 September 2025 00:43:45 +0000 (0:00:02.407) 0:00:07.310 ****** 2025-09-08 00:45:09.141134 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:45:09.141145 | orchestrator 
| ok: [testbed-node-0] 2025-09-08 00:45:09.141155 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:45:09.141166 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:45:09.141179 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:45:09.141192 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:45:09.141205 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:09.141218 | orchestrator | 2025-09-08 00:45:09.141231 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-08 00:45:09.141246 | orchestrator | Monday 08 September 2025 00:43:48 +0000 (0:00:03.231) 0:00:10.541 ****** 2025-09-08 00:45:09.141259 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:45:09.141271 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:45:09.141284 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:45:09.141298 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:45:09.141311 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:09.141325 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:45:09.141338 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:45:09.141351 | orchestrator | 2025-09-08 00:45:09.141364 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-08 00:45:09.141377 | orchestrator | Monday 08 September 2025 00:43:50 +0000 (0:00:02.059) 0:00:12.601 ****** 2025-09-08 00:45:09.141390 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:45:09.141403 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:45:09.141416 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:45:09.141429 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:45:09.141443 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:45:09.141456 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:45:09.141469 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:09.141481 | orchestrator | 2025-09-08 00:45:09.141494 | orchestrator | TASK 
[osism.services.netdata : Install package netdata] ************************ 2025-09-08 00:45:09.141508 | orchestrator | Monday 08 September 2025 00:44:02 +0000 (0:00:11.245) 0:00:23.846 ****** 2025-09-08 00:45:09.141521 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:45:09.141535 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:45:09.141546 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:45:09.141557 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:45:09.141588 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:45:09.141600 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:45:09.141610 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:09.141621 | orchestrator | 2025-09-08 00:45:09.141632 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-08 00:45:09.141678 | orchestrator | Monday 08 September 2025 00:44:45 +0000 (0:00:43.069) 0:01:06.916 ****** 2025-09-08 00:45:09.141691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:45:09.141704 | orchestrator | 2025-09-08 00:45:09.141715 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-08 00:45:09.141726 | orchestrator | Monday 08 September 2025 00:44:47 +0000 (0:00:02.019) 0:01:08.935 ****** 2025-09-08 00:45:09.141736 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-08 00:45:09.141748 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-08 00:45:09.141775 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-08 00:45:09.141787 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-08 00:45:09.141798 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-08 
00:45:09.141809 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-08 00:45:09.141819 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-08 00:45:09.141839 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-08 00:45:09.141850 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-08 00:45:09.141860 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-08 00:45:09.141871 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-08 00:45:09.141882 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-08 00:45:09.141892 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-08 00:45:09.141903 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-08 00:45:09.141913 | orchestrator | 2025-09-08 00:45:09.141924 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-08 00:45:09.141935 | orchestrator | Monday 08 September 2025 00:44:53 +0000 (0:00:05.875) 0:01:14.810 ****** 2025-09-08 00:45:09.141946 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:09.141957 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:45:09.141968 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:45:09.141978 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:45:09.141989 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:45:09.142000 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:45:09.142010 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:45:09.142077 | orchestrator | 2025-09-08 00:45:09.142089 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-08 00:45:09.142100 | orchestrator | Monday 08 September 2025 00:44:54 +0000 (0:00:01.131) 0:01:15.942 ****** 2025-09-08 00:45:09.142143 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:09.142155 | orchestrator | changed: 
[testbed-node-1] 2025-09-08 00:45:09.142166 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:45:09.142177 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:45:09.142188 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:45:09.142199 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:45:09.142210 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:45:09.142220 | orchestrator | 2025-09-08 00:45:09.142231 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-08 00:45:09.142242 | orchestrator | Monday 08 September 2025 00:44:56 +0000 (0:00:01.782) 0:01:17.725 ****** 2025-09-08 00:45:09.142253 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:45:09.142264 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:45:09.142275 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:45:09.142286 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:45:09.142297 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:45:09.142308 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:45:09.142318 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:09.142450 | orchestrator | 2025-09-08 00:45:09.142462 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-08 00:45:09.142480 | orchestrator | Monday 08 September 2025 00:44:57 +0000 (0:00:01.524) 0:01:19.249 ****** 2025-09-08 00:45:09.142491 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:45:09.142502 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:45:09.142513 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:45:09.142524 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:45:09.142535 | orchestrator | ok: [testbed-manager] 2025-09-08 00:45:09.142545 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:45:09.142556 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:45:09.142586 | orchestrator | 2025-09-08 00:45:09.142597 | orchestrator | TASK [osism.services.netdata : Include host type specific 
tasks] *************** 2025-09-08 00:45:09.142608 | orchestrator | Monday 08 September 2025 00:44:59 +0000 (0:00:02.141) 0:01:21.391 ****** 2025-09-08 00:45:09.142620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-08 00:45:09.142632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:45:09.142644 | orchestrator | 2025-09-08 00:45:09.142655 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-08 00:45:09.142675 | orchestrator | Monday 08 September 2025 00:45:01 +0000 (0:00:01.445) 0:01:22.836 ****** 2025-09-08 00:45:09.142685 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:09.142696 | orchestrator | 2025-09-08 00:45:09.142707 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-08 00:45:09.142718 | orchestrator | Monday 08 September 2025 00:45:03 +0000 (0:00:02.069) 0:01:24.906 ****** 2025-09-08 00:45:09.142729 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:45:09.142739 | orchestrator | changed: [testbed-manager] 2025-09-08 00:45:09.142750 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:45:09.142761 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:45:09.142771 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:45:09.142782 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:45:09.142793 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:45:09.142803 | orchestrator | 2025-09-08 00:45:09.142814 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:45:09.142826 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2025-09-08 00:45:09.142837 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:45:09.142848 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:45:09.142871 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:45:09.142883 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:45:09.142893 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:45:09.142904 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:45:09.142915 | orchestrator | 2025-09-08 00:45:09.142926 | orchestrator | 2025-09-08 00:45:09.142937 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:45:09.142948 | orchestrator | Monday 08 September 2025 00:45:06 +0000 (0:00:03.268) 0:01:28.174 ****** 2025-09-08 00:45:09.142959 | orchestrator | =============================================================================== 2025-09-08 00:45:09.142970 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 43.07s 2025-09-08 00:45:09.142980 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.25s 2025-09-08 00:45:09.142991 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.88s 2025-09-08 00:45:09.143002 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.27s 2025-09-08 00:45:09.143012 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.23s 2025-09-08 00:45:09.143023 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.41s 
2025-09-08 00:45:09.143034 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.30s 2025-09-08 00:45:09.143045 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.14s 2025-09-08 00:45:09.143056 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.07s 2025-09-08 00:45:09.143068 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.06s 2025-09-08 00:45:09.143082 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.02s 2025-09-08 00:45:09.143095 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.78s 2025-09-08 00:45:09.143108 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.52s 2025-09-08 00:45:09.143128 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.50s 2025-09-08 00:45:09.143141 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.45s 2025-09-08 00:45:09.143160 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.13s 2025-09-08 00:45:09.143174 | orchestrator | 2025-09-08 00:45:09 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:45:09.144308 | orchestrator | 2025-09-08 00:45:09 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:45:09.145941 | orchestrator | 2025-09-08 00:45:09 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:45:09.146065 | orchestrator | 2025-09-08 00:45:09 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:45:12.186440 | orchestrator | 2025-09-08 00:45:12 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:45:12.187925 | orchestrator | 2025-09-08 00:45:12 | INFO  | Task 
9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:45:12.187964 | orchestrator | 2025-09-08 00:45:12 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state STARTED 2025-09-08 00:45:12.187978 | orchestrator | 2025-09-08 00:45:12 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:46:07.156923 | orchestrator | 2025-09-08 00:46:07 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:46:07.157564 | orchestrator | 2025-09-08 00:46:07 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:46:07.158356 | orchestrator | 2025-09-08 00:46:07 | INFO  | Task 9549b565-96d3-4902-9161-7b5aa48e0a9b is in state STARTED 2025-09-08 00:46:07.159275 | orchestrator | 2025-09-08 00:46:07 | INFO  | Task 64cca344-0220-44e8-b3ae-7397b8de609e is in state STARTED 2025-09-08 00:46:07.161030 | orchestrator | 2025-09-08 00:46:07 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:46:07.167685 | orchestrator | 2025-09-08 00:46:07 | INFO  | Task 46c30903-1f7d-4b92-b000-cd19ae38b060 is in state SUCCESS 2025-09-08 
00:46:07.170287 | orchestrator |
2025-09-08 00:46:07.170402 | orchestrator |
2025-09-08 00:46:07.170419 | orchestrator | PLAY [Apply role common] *******************************************************
2025-09-08 00:46:07.170472 | orchestrator |
2025-09-08 00:46:07.170485 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-08 00:46:07.170496 | orchestrator | Monday 08 September 2025 00:43:30 +0000 (0:00:00.307) 0:00:00.307 ******
2025-09-08 00:46:07.170508 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:46:07.170521 | orchestrator |
2025-09-08 00:46:07.170532 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-08 00:46:07.170543 | orchestrator | Monday 08 September 2025 00:43:31 +0000 (0:00:01.493) 0:00:01.801 ******
2025-09-08 00:46:07.170554 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:07.170565 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:07.170609 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:07.170621 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:07.170632 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:07.170643 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:07.170654 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:07.170666 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:07.170677 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:07.170688 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:07.170699 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:07.170710 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:07.170721 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:07.170732 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-08 00:46:07.170742 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:07.170753 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:07.170764 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:07.170775 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-08 00:46:07.170786 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:07.170797 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:07.170808 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-08 00:46:07.170818 | orchestrator |
2025-09-08 00:46:07.170829 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-08 00:46:07.170840 | orchestrator | Monday 08 September 2025 00:43:37 +0000 (0:00:05.313) 0:00:07.115 ******
2025-09-08 00:46:07.170851 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:46:07.170864 | orchestrator |
2025-09-08 00:46:07.170894 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-09-08 00:46:07.170905 | orchestrator | Monday 08 September 2025 00:43:38 +0000 (0:00:01.283) 0:00:08.398 ******
2025-09-08 00:46:07.170922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.170950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.170981 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.170994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171042 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171192 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171406 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171417 | orchestrator |
2025-09-08 00:46:07.171429 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-08 00:46:07.171441 | orchestrator | Monday 08 September 2025 00:43:42 +0000 (0:00:04.521) 0:00:12.919 ******
2025-09-08 00:46:07.171452 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171484 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171497 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171647 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:46:07.171664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171712 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:46:07.171723 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:46:07.171734 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:46:07.171745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171787 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:46:07.171799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171838 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:46:07.171865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171901 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:46:07.171912 | orchestrator |
2025-09-08 00:46:07.171923 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-09-08 00:46:07.171934 | orchestrator | Monday 08 September 2025 00:43:44 +0000 (0:00:01.494) 0:00:14.413 ******
2025-09-08 00:46:07.171946 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.171982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.171994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172012 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172025 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172036 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:46:07.172047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.172059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172091 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:46:07.172103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.172119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172131 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:46:07.172142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.172177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172206 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:46:07.172217 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:46:07.172229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.172240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172269 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:46:07.172280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-08 00:46:07.172298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:46:07.172322 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:46:07.172333 | orchestrator |
2025-09-08 00:46:07.172344 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-08
00:46:07.172361 | orchestrator | Monday 08 September 2025 00:43:47 +0000 (0:00:03.156) 0:00:17.570 ****** 2025-09-08 00:46:07.172372 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:46:07.172383 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:46:07.172394 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:46:07.172405 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:46:07.172416 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:46:07.172427 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:46:07.172438 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:46:07.172449 | orchestrator | 2025-09-08 00:46:07.172460 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-08 00:46:07.172471 | orchestrator | Monday 08 September 2025 00:43:48 +0000 (0:00:01.245) 0:00:18.816 ****** 2025-09-08 00:46:07.172482 | orchestrator | skipping: [testbed-manager] 2025-09-08 00:46:07.172493 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:46:07.172504 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:46:07.172515 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:46:07.172525 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:46:07.172536 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:46:07.172547 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:46:07.172558 | orchestrator | 2025-09-08 00:46:07.172569 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-08 00:46:07.172596 | orchestrator | Monday 08 September 2025 00:43:50 +0000 (0:00:01.538) 0:00:20.355 ****** 2025-09-08 00:46:07.172608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.172620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.172632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.172644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.172667 | orchestrator 
| changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.172686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.172715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.172727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172755 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172896 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.172919 | orchestrator |
2025-09-08 00:46:07.172931 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-08 00:46:07.172942 | orchestrator | Monday 08 September 2025 00:43:58 +0000 (0:00:07.783) 0:00:28.139 ******
2025-09-08 00:46:07.172953 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a directory
2025-09-08 00:46:07.173009 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 00:46:07.173020 | orchestrator |
2025-09-08 00:46:07.173031 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-08 00:46:07.173042 | orchestrator | Monday 08 September 2025 00:43:59 +0000 (0:00:01.647) 0:00:29.786 ******
2025-09-08 00:46:07.173053 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a directory
2025-09-08 00:46:07.173107 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 00:46:07.173118 | orchestrator |
2025-09-08 00:46:07.173129 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-08 00:46:07.173140 | orchestrator | Monday 08 September 2025 00:44:00 +0000 (0:00:01.145) 0:00:30.932 ******
2025-09-08 00:46:07.173151 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a directory
2025-09-08 00:46:07.173205 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 00:46:07.173216 | orchestrator |
2025-09-08 00:46:07.173227 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-08 00:46:07.173238 | orchestrator | Monday 08 September 2025 00:44:01 +0000 (0:00:00.934) 0:00:31.867 ******
2025-09-08 00:46:07.173249 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a directory
2025-09-08 00:46:07.173303 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 00:46:07.173314 | orchestrator |
2025-09-08 00:46:07.173331 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-08 00:46:07.173342 | orchestrator | Monday 08 September 2025 00:44:02 +0000 (0:00:00.799) 0:00:32.667 ****** 2025-09-08 00:46:07.173353 | orchestrator | changed: [testbed-manager] 2025-09-08 00:46:07.173364 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:07.173375 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:07.173386 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:07.173396 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:46:07.173407 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:46:07.173418 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:46:07.173429 | orchestrator |
2025-09-08 00:46:07.173444 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-08 00:46:07.173455 | orchestrator | Monday 08 September 2025 00:44:06 +0000 (0:00:03.834) 0:00:36.501 ****** 2025-09-08 00:46:07.173466 | orchestrator | changed: [testbed-node-0] =>
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:07.173477 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:07.173488 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:07.173499 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:07.173510 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:07.173521 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:07.173532 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-08 00:46:07.173543 | orchestrator | 2025-09-08 00:46:07.173554 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-08 00:46:07.173565 | orchestrator | Monday 08 September 2025 00:44:09 +0000 (0:00:03.161) 0:00:39.663 ****** 2025-09-08 00:46:07.173606 | orchestrator | changed: [testbed-manager] 2025-09-08 00:46:07.173618 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:07.173628 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:07.173639 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:46:07.173658 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:07.173669 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:46:07.173680 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:46:07.173691 | orchestrator | 2025-09-08 00:46:07.173702 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-08 00:46:07.173713 | orchestrator | Monday 08 September 2025 00:44:12 +0000 (0:00:03.340) 0:00:43.003 ****** 2025-09-08 
00:46:07.173725 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.173737 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:07.173755 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.173768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:07.173784 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.173796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:07.173820 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.173846 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.173858 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.173870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:07.173887 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.173899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.173915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:07.173927 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-08 00:46:07.173947 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.173959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:07.173971 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.173989 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.174001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:46:07.174012 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174065 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174080 | orchestrator | 2025-09-08 00:46:07.174091 | orchestrator | TASK [common : Copy rabbitmq-env.conf 
to kolla toolbox] ************************ 2025-09-08 00:46:07.174102 | orchestrator | Monday 08 September 2025 00:44:15 +0000 (0:00:02.958) 0:00:45.962 ****** 2025-09-08 00:46:07.174114 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:07.174125 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:07.174136 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:07.174147 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:07.174158 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:07.174169 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:07.174180 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-08 00:46:07.174191 | orchestrator | 2025-09-08 00:46:07.174210 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-08 00:46:07.174221 | orchestrator | Monday 08 September 2025 00:44:18 +0000 (0:00:02.370) 0:00:48.332 ****** 2025-09-08 00:46:07.174232 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:07.174244 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:07.174255 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:07.174274 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:07.174285 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:07.174296 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:07.174307 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-08 00:46:07.174318 | orchestrator | 2025-09-08 00:46:07.174329 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-08 00:46:07.174340 | orchestrator | Monday 08 September 2025 00:44:21 +0000 (0:00:02.787) 0:00:51.119 ****** 2025-09-08 00:46:07.174351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.174363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.174375 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.174392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.174404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.174422 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.174464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-08 00:46:07.174476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-08 00:46:07.174527 | orchestrator | changed: [testbed-manager] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174561 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 
00:46:07.174603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174656 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:46:07.174674 | orchestrator | 2025-09-08 00:46:07.174685 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-08 00:46:07.174697 | orchestrator | Monday 08 September 2025 00:44:24 +0000 (0:00:03.541) 0:00:54.661 ****** 2025-09-08 00:46:07.174714 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:07.174725 | orchestrator | changed: [testbed-manager] 2025-09-08 00:46:07.174736 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:07.174747 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:07.174758 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:46:07.174769 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:46:07.174780 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:46:07.174791 | orchestrator | 2025-09-08 00:46:07.174802 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-08 00:46:07.174813 | orchestrator | Monday 08 September 2025 00:44:26 +0000 (0:00:01.609) 0:00:56.271 ****** 2025-09-08 00:46:07.174824 | orchestrator | changed: [testbed-manager] 2025-09-08 00:46:07.174835 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:07.174846 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:07.174857 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:07.174868 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:46:07.174879 | 
orchestrator | changed: [testbed-node-4] 2025-09-08 00:46:07.174890 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:46:07.174901 | orchestrator | 2025-09-08 00:46:07.174912 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-08 00:46:07.174923 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:01.038) 0:00:57.309 ****** 2025-09-08 00:46:07.174934 | orchestrator | 2025-09-08 00:46:07.174945 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-08 00:46:07.174956 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:00.068) 0:00:57.378 ****** 2025-09-08 00:46:07.174967 | orchestrator | 2025-09-08 00:46:07.174978 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-08 00:46:07.174989 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:00.060) 0:00:57.438 ****** 2025-09-08 00:46:07.174999 | orchestrator | 2025-09-08 00:46:07.175011 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-08 00:46:07.175022 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:00.182) 0:00:57.621 ****** 2025-09-08 00:46:07.175033 | orchestrator | 2025-09-08 00:46:07.175043 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-08 00:46:07.175054 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:00.081) 0:00:57.702 ****** 2025-09-08 00:46:07.175065 | orchestrator | 2025-09-08 00:46:07.175076 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-08 00:46:07.175087 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:00.072) 0:00:57.774 ****** 2025-09-08 00:46:07.175098 | orchestrator | 2025-09-08 00:46:07.175109 | orchestrator | TASK [common : Flush handlers] ************************************************* 
2025-09-08 00:46:07.175120 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:00.064) 0:00:57.839 ****** 2025-09-08 00:46:07.175131 | orchestrator | 2025-09-08 00:46:07.175142 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-08 00:46:07.175153 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:00.080) 0:00:57.919 ****** 2025-09-08 00:46:07.175164 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:07.175176 | orchestrator | changed: [testbed-manager] 2025-09-08 00:46:07.175187 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:46:07.175197 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:46:07.175208 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:07.175219 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:07.175230 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:46:07.175247 | orchestrator | 2025-09-08 00:46:07.175258 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-08 00:46:07.175270 | orchestrator | Monday 08 September 2025 00:45:06 +0000 (0:00:38.176) 0:01:36.096 ****** 2025-09-08 00:46:07.175281 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:07.175292 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:46:07.175303 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:46:07.175313 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:07.175324 | orchestrator | changed: [testbed-manager] 2025-09-08 00:46:07.175335 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:46:07.175346 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:07.175357 | orchestrator | 2025-09-08 00:46:07.175368 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-08 00:46:07.175379 | orchestrator | Monday 08 September 2025 00:45:51 +0000 (0:00:45.343) 0:02:21.440 ****** 2025-09-08 00:46:07.175390 | orchestrator 
| ok: [testbed-manager] 2025-09-08 00:46:07.175406 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:46:07.175417 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:46:07.175428 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:46:07.175439 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:46:07.175450 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:46:07.175461 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:46:07.175472 | orchestrator | 2025-09-08 00:46:07.175483 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-08 00:46:07.175494 | orchestrator | Monday 08 September 2025 00:45:53 +0000 (0:00:02.388) 0:02:23.828 ****** 2025-09-08 00:46:07.175505 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:46:07.175516 | orchestrator | changed: [testbed-manager] 2025-09-08 00:46:07.175527 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:46:07.175538 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:46:07.175548 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:46:07.175559 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:46:07.175586 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:46:07.175598 | orchestrator | 2025-09-08 00:46:07.175609 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:46:07.175621 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-08 00:46:07.175633 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-08 00:46:07.175644 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-08 00:46:07.175661 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-08 00:46:07.175673 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 
skipped=4  rescued=0 ignored=0 2025-09-08 00:46:07.175684 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-08 00:46:07.175695 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-08 00:46:07.175706 | orchestrator | 2025-09-08 00:46:07.175717 | orchestrator | 2025-09-08 00:46:07.175728 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:46:07.175739 | orchestrator | Monday 08 September 2025 00:46:04 +0000 (0:00:10.512) 0:02:34.340 ****** 2025-09-08 00:46:07.175750 | orchestrator | =============================================================================== 2025-09-08 00:46:07.175761 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 45.34s 2025-09-08 00:46:07.175779 | orchestrator | common : Restart fluentd container ------------------------------------- 38.18s 2025-09-08 00:46:07.175790 | orchestrator | common : Restart cron container ---------------------------------------- 10.51s 2025-09-08 00:46:07.175801 | orchestrator | common : Copying over config.json files for services -------------------- 7.78s 2025-09-08 00:46:07.175812 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.31s 2025-09-08 00:46:07.175823 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.52s 2025-09-08 00:46:07.175834 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.83s 2025-09-08 00:46:07.175845 | orchestrator | common : Check common containers ---------------------------------------- 3.54s 2025-09-08 00:46:07.175855 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.34s 2025-09-08 00:46:07.175866 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.16s 
2025-09-08 00:46:07.175877 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.16s 2025-09-08 00:46:07.175888 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.96s 2025-09-08 00:46:07.175899 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.79s 2025-09-08 00:46:07.175910 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.39s 2025-09-08 00:46:07.175921 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.37s 2025-09-08 00:46:07.175931 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.65s 2025-09-08 00:46:07.175942 | orchestrator | common : Creating log volume -------------------------------------------- 1.61s 2025-09-08 00:46:07.175953 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.54s 2025-09-08 00:46:07.175964 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.49s 2025-09-08 00:46:07.175975 | orchestrator | common : include_tasks -------------------------------------------------- 1.49s 2025-09-08 00:46:07.175986 | orchestrator | 2025-09-08 00:46:07 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:46:07.175997 | orchestrator | 2025-09-08 00:46:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:46:10.202996 | orchestrator | 2025-09-08 00:46:10 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:46:10.203418 | orchestrator | 2025-09-08 00:46:10 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:46:10.205243 | orchestrator | 2025-09-08 00:46:10 | INFO  | Task 9549b565-96d3-4902-9161-7b5aa48e0a9b is in state STARTED 2025-09-08 00:46:10.205265 | orchestrator | 2025-09-08 00:46:10 | INFO  | Task 
64cca344-0220-44e8-b3ae-7397b8de609e is in state STARTED 2025-09-08 00:46:10.205276 | orchestrator | 2025-09-08 00:46:10 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:46:10.205796 | orchestrator | 2025-09-08 00:46:10 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:46:10.205816 | orchestrator | 2025-09-08 00:46:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:46:13.234499 | orchestrator | 2025-09-08 00:46:13 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:46:13.235729 | orchestrator | 2025-09-08 00:46:13 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:46:13.237431 | orchestrator | 2025-09-08 00:46:13 | INFO  | Task 9549b565-96d3-4902-9161-7b5aa48e0a9b is in state STARTED 2025-09-08 00:46:13.238459 | orchestrator | 2025-09-08 00:46:13 | INFO  | Task 64cca344-0220-44e8-b3ae-7397b8de609e is in state STARTED 2025-09-08 00:46:13.239761 | orchestrator | 2025-09-08 00:46:13 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:46:13.241445 | orchestrator | 2025-09-08 00:46:13 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:46:13.241469 | orchestrator | 2025-09-08 00:46:13 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:46:16.282203 | orchestrator | 2025-09-08 00:46:16 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:46:16.283095 | orchestrator | 2025-09-08 00:46:16 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:46:16.283737 | orchestrator | 2025-09-08 00:46:16 | INFO  | Task 9549b565-96d3-4902-9161-7b5aa48e0a9b is in state STARTED 2025-09-08 00:46:16.283877 | orchestrator | 2025-09-08 00:46:16 | INFO  | Task 64cca344-0220-44e8-b3ae-7397b8de609e is in state STARTED 2025-09-08 00:46:16.284554 | orchestrator | 2025-09-08 00:46:16 | INFO  | Task 
60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:46:16.291230 | orchestrator | 2025-09-08 00:46:16 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:46:16.291255 | orchestrator | 2025-09-08 00:46:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:46:19.387776 | orchestrator | 2025-09-08 00:46:19 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:46:19.388846 | orchestrator | 2025-09-08 00:46:19 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:46:19.389761 | orchestrator | 2025-09-08 00:46:19 | INFO  | Task 9549b565-96d3-4902-9161-7b5aa48e0a9b is in state STARTED 2025-09-08 00:46:19.390705 | orchestrator | 2025-09-08 00:46:19 | INFO  | Task 64cca344-0220-44e8-b3ae-7397b8de609e is in state STARTED 2025-09-08 00:46:19.391753 | orchestrator | 2025-09-08 00:46:19 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:46:19.392795 | orchestrator | 2025-09-08 00:46:19 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:46:19.392983 | orchestrator | 2025-09-08 00:46:19 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:46:22.481279 | orchestrator | 2025-09-08 00:46:22 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:46:22.483099 | orchestrator | 2025-09-08 00:46:22 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:46:22.485246 | orchestrator | 2025-09-08 00:46:22 | INFO  | Task 9549b565-96d3-4902-9161-7b5aa48e0a9b is in state STARTED 2025-09-08 00:46:22.485268 | orchestrator | 2025-09-08 00:46:22 | INFO  | Task 64cca344-0220-44e8-b3ae-7397b8de609e is in state STARTED 2025-09-08 00:46:22.485941 | orchestrator | 2025-09-08 00:46:22 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:46:22.488888 | orchestrator | 2025-09-08 00:46:22 | INFO  | Task 
4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:46:22.489043 | orchestrator | 2025-09-08 00:46:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:46:25.552075 | orchestrator | 2025-09-08 00:46:25 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:46:25.557162 | orchestrator | 2025-09-08 00:46:25 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:46:25.558183 | orchestrator | 2025-09-08 00:46:25 | INFO  | Task 9549b565-96d3-4902-9161-7b5aa48e0a9b is in state STARTED 2025-09-08 00:46:25.559104 | orchestrator | 2025-09-08 00:46:25 | INFO  | Task 64cca344-0220-44e8-b3ae-7397b8de609e is in state STARTED 2025-09-08 00:46:25.560215 | orchestrator | 2025-09-08 00:46:25 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:46:25.561057 | orchestrator | 2025-09-08 00:46:25 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:46:25.561184 | orchestrator | 2025-09-08 00:46:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:46:28.603729 | orchestrator | 2025-09-08 00:46:28 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:46:28.603951 | orchestrator | 2025-09-08 00:46:28 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:46:28.605155 | orchestrator | 2025-09-08 00:46:28 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:46:28.607347 | orchestrator | 2025-09-08 00:46:28 | INFO  | Task 9549b565-96d3-4902-9161-7b5aa48e0a9b is in state STARTED 2025-09-08 00:46:28.609521 | orchestrator | 2025-09-08 00:46:28 | INFO  | Task 64cca344-0220-44e8-b3ae-7397b8de609e is in state SUCCESS 2025-09-08 00:46:28.612853 | orchestrator | 2025-09-08 00:46:28 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:46:28.615086 | orchestrator | 2025-09-08 00:46:28 | INFO  | Task 
4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:28.615109 | orchestrator | 2025-09-08 00:46:28 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:31.669966 | orchestrator |
2025-09-08 00:46:31.670123 | orchestrator |
2025-09-08 00:46:31.670140 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:46:31.670153 | orchestrator |
2025-09-08 00:46:31.670164 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:46:31.670176 | orchestrator | Monday 08 September 2025 00:46:09 +0000 (0:00:00.318) 0:00:00.318 ******
2025-09-08 00:46:31.670187 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:46:31.670199 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:46:31.670210 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:46:31.670221 | orchestrator |
2025-09-08 00:46:31.670232 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:46:31.670243 | orchestrator | Monday 08 September 2025 00:46:09 +0000 (0:00:00.533) 0:00:00.851 ******
2025-09-08 00:46:31.670255 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-08 00:46:31.670266 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-08 00:46:31.670277 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-08 00:46:31.670288 | orchestrator |
2025-09-08 00:46:31.670298 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-08 00:46:31.670309 | orchestrator |
2025-09-08 00:46:31.670320 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-08 00:46:31.670331 | orchestrator | Monday 08 September 2025 00:46:10 +0000 (0:00:00.469) 0:00:01.320 ******
2025-09-08 00:46:31.670343 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for
testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:46:31.670354 | orchestrator |
2025-09-08 00:46:31.670365 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-08 00:46:31.670376 | orchestrator | Monday 08 September 2025 00:46:10 +0000 (0:00:00.499) 0:00:01.820 ******
2025-09-08 00:46:31.670387 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-08 00:46:31.670398 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-08 00:46:31.670409 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-08 00:46:31.670420 | orchestrator |
2025-09-08 00:46:31.670430 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-08 00:46:31.670441 | orchestrator | Monday 08 September 2025 00:46:11 +0000 (0:00:00.777) 0:00:02.598 ******
2025-09-08 00:46:31.670452 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-08 00:46:31.670463 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-08 00:46:31.670474 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-08 00:46:31.670514 | orchestrator |
2025-09-08 00:46:31.670525 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-08 00:46:31.670536 | orchestrator | Monday 08 September 2025 00:46:13 +0000 (0:00:01.683) 0:00:04.281 ******
2025-09-08 00:46:31.670547 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:31.670558 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:31.670569 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:31.670579 | orchestrator |
2025-09-08 00:46:31.670618 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-08 00:46:31.670629 | orchestrator | Monday 08 September 2025 00:46:14 +0000 (0:00:01.646) 0:00:05.928 ******
2025-09-08 00:46:31.670640 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:31.670651 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:31.670661 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:31.670672 | orchestrator |
2025-09-08 00:46:31.670683 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:46:31.670694 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:31.670724 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:31.670736 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:31.670748 | orchestrator |
2025-09-08 00:46:31.670759 | orchestrator |
2025-09-08 00:46:31.670770 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:46:31.670780 | orchestrator | Monday 08 September 2025 00:46:23 +0000 (0:00:08.963) 0:00:14.891 ******
2025-09-08 00:46:31.670791 | orchestrator | ===============================================================================
2025-09-08 00:46:31.670802 | orchestrator | memcached : Restart memcached container --------------------------------- 8.96s
2025-09-08 00:46:31.670813 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.68s
2025-09-08 00:46:31.670824 | orchestrator | memcached : Check memcached container ----------------------------------- 1.65s
2025-09-08 00:46:31.670835 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.78s
2025-09-08 00:46:31.670845 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s
2025-09-08 00:46:31.670856 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s
2025-09-08 00:46:31.670867 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2025-09-08 00:46:31.670877 | orchestrator |
2025-09-08 00:46:31.670888 | orchestrator |
2025-09-08 00:46:31.670899 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:46:31.670910 | orchestrator |
2025-09-08 00:46:31.670921 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:46:31.670931 | orchestrator | Monday 08 September 2025 00:46:09 +0000 (0:00:00.324) 0:00:00.324 ******
2025-09-08 00:46:31.670942 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:46:31.670953 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:46:31.670964 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:46:31.670975 | orchestrator |
2025-09-08 00:46:31.670986 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:46:31.671014 | orchestrator | Monday 08 September 2025 00:46:09 +0000 (0:00:00.424) 0:00:00.749 ******
2025-09-08 00:46:31.671026 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-08 00:46:31.671037 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-08 00:46:31.671047 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-08 00:46:31.671058 | orchestrator |
2025-09-08 00:46:31.671069 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-08 00:46:31.671080 | orchestrator |
2025-09-08 00:46:31.671090 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-08 00:46:31.671110 | orchestrator | Monday 08 September 2025 00:46:10 +0000 (0:00:00.546) 0:00:01.296 ******
2025-09-08 00:46:31.671122 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:46:31.671133 | orchestrator |
2025-09-08 00:46:31.671144 | orchestrator | TASK [redis
: Ensuring config directories exist] *******************************
2025-09-08 00:46:31.671154 | orchestrator | Monday 08 September 2025 00:46:10 +0000 (0:00:00.564) 0:00:01.860 ******
2025-09-08 00:46:31.671169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671272 | orchestrator |
2025-09-08 00:46:31.671284 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-09-08 00:46:31.671295 | orchestrator | Monday 08 September 2025 00:46:11 +0000 (0:00:01.123) 0:00:02.984 ******
2025-09-08 00:46:31.671307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671401 | orchestrator |
2025-09-08 00:46:31.671413 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-09-08 00:46:31.671424 | orchestrator | Monday 08 September 2025 00:46:14 +0000 (0:00:02.493) 0:00:05.477 ******
2025-09-08 00:46:31.671436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671530 | orchestrator |
2025-09-08 00:46:31.671541 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-09-08 00:46:31.671552 | orchestrator | Monday 08 September 2025 00:46:17 +0000 (0:00:03.182) 0:00:08.659 ******
2025-09-08 00:46:31.671564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-08 00:46:31.671668 | orchestrator |
2025-09-08 00:46:31.671679 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-08 00:46:31.671691 | orchestrator | Monday 08 September 2025 00:46:19 +0000 (0:00:02.182) 0:00:10.842 ******
2025-09-08 00:46:31.671702 | orchestrator |
2025-09-08 00:46:31.671713 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-08 00:46:31.671724 | orchestrator | Monday 08 September 2025 00:46:19 +0000 (0:00:00.175) 0:00:11.017 ******
2025-09-08 00:46:31.671735 | orchestrator |
2025-09-08 00:46:31.671746 | orchestrator | TASK [redis : Flush handlers]
**************************************************
2025-09-08 00:46:31.671757 | orchestrator | Monday 08 September 2025 00:46:20 +0000 (0:00:00.147) 0:00:11.165 ******
2025-09-08 00:46:31.671768 | orchestrator |
2025-09-08 00:46:31.671778 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-09-08 00:46:31.671789 | orchestrator | Monday 08 September 2025 00:46:20 +0000 (0:00:00.209) 0:00:11.375 ******
2025-09-08 00:46:31.671800 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:31.671811 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:31.671822 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:31.671833 | orchestrator |
2025-09-08 00:46:31.671844 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-08 00:46:31.671855 | orchestrator | Monday 08 September 2025 00:46:24 +0000 (0:00:03.959) 0:00:15.334 ******
2025-09-08 00:46:31.671866 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:46:31.671877 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:46:31.671888 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:46:31.671899 | orchestrator |
2025-09-08 00:46:31.671910 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:46:31.671921 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:31.671933 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:31.671944 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:46:31.671954 | orchestrator |
2025-09-08 00:46:31.671965 | orchestrator |
2025-09-08 00:46:31.671976 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:46:31.671987 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:04.953) 0:00:20.287 ******
2025-09-08 00:46:31.671998 | orchestrator | ===============================================================================
2025-09-08 00:46:31.672009 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.95s
2025-09-08 00:46:31.672020 | orchestrator | redis : Restart redis container ----------------------------------------- 3.96s
2025-09-08 00:46:31.672031 | orchestrator | redis : Copying over redis config files --------------------------------- 3.18s
2025-09-08 00:46:31.672042 | orchestrator | redis : Copying over default config.json files -------------------------- 2.49s
2025-09-08 00:46:31.672053 | orchestrator | redis : Check redis containers ------------------------------------------ 2.18s
2025-09-08 00:46:31.672072 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.12s
2025-09-08 00:46:31.672083 | orchestrator | redis : include_tasks --------------------------------------------------- 0.56s
2025-09-08 00:46:31.672094 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2025-09-08 00:46:31.672105 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.53s
2025-09-08 00:46:31.672116 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2025-09-08 00:46:31.672127 | orchestrator | 2025-09-08 00:46:31 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:31.672138 | orchestrator | 2025-09-08 00:46:31 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:31.672149 | orchestrator | 2025-09-08 00:46:31 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:31.672160 | orchestrator | 2025-09-08 00:46:31 | INFO  | Task 9549b565-96d3-4902-9161-7b5aa48e0a9b is in state SUCCESS
2025-09-08 00:46:31.672171 |
orchestrator | 2025-09-08 00:46:31 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:31.672182 | orchestrator | 2025-09-08 00:46:31 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:31.672193 | orchestrator | 2025-09-08 00:46:31 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:34.697475 | orchestrator | 2025-09-08 00:46:34 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:34.697823 | orchestrator | 2025-09-08 00:46:34 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:34.699775 | orchestrator | 2025-09-08 00:46:34 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:34.702143 | orchestrator | 2025-09-08 00:46:34 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:34.705397 | orchestrator | 2025-09-08 00:46:34 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:34.706327 | orchestrator | 2025-09-08 00:46:34 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:37.749843 | orchestrator | 2025-09-08 00:46:37 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:37.749960 | orchestrator | 2025-09-08 00:46:37 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:37.750005 | orchestrator | 2025-09-08 00:46:37 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:37.750095 | orchestrator | 2025-09-08 00:46:37 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:37.750110 | orchestrator | 2025-09-08 00:46:37 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:37.750122 | orchestrator | 2025-09-08 00:46:37 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:40.811684 | orchestrator | 2025-09-08 00:46:40 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:40.815344 | orchestrator | 2025-09-08 00:46:40 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:40.817959 | orchestrator | 2025-09-08 00:46:40 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:40.820200 | orchestrator | 2025-09-08 00:46:40 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:40.822635 | orchestrator | 2025-09-08 00:46:40 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:40.823394 | orchestrator | 2025-09-08 00:46:40 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:43.882755 | orchestrator | 2025-09-08 00:46:43 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:43.882862 | orchestrator | 2025-09-08 00:46:43 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:43.882878 | orchestrator | 2025-09-08 00:46:43 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:43.883409 | orchestrator | 2025-09-08 00:46:43 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:43.883436 | orchestrator | 2025-09-08 00:46:43 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:43.883450 | orchestrator | 2025-09-08 00:46:43 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:46.931110 | orchestrator | 2025-09-08 00:46:46 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:46.931293 | orchestrator | 2025-09-08 00:46:46 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:46.931326 | orchestrator | 2025-09-08 00:46:46 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:46.931339 | orchestrator | 2025-09-08 00:46:46 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:46.931364 | orchestrator | 2025-09-08 00:46:46 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:46.931376 | orchestrator | 2025-09-08 00:46:46 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:49.999770 | orchestrator | 2025-09-08 00:46:49 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:49.999851 | orchestrator | 2025-09-08 00:46:49 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:49.999859 | orchestrator | 2025-09-08 00:46:49 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:49.999867 | orchestrator | 2025-09-08 00:46:49 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:49.999873 | orchestrator | 2025-09-08 00:46:49 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:49.999880 | orchestrator | 2025-09-08 00:46:49 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:53.010482 | orchestrator | 2025-09-08 00:46:53 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:53.011101 | orchestrator | 2025-09-08 00:46:53 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:53.016876 | orchestrator | 2025-09-08 00:46:53 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:53.018133 | orchestrator | 2025-09-08 00:46:53 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:53.026171 | orchestrator | 2025-09-08 00:46:53 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:53.026258 | orchestrator | 2025-09-08 00:46:53 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:56.052332 | orchestrator | 2025-09-08 00:46:56 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:56.052575 | orchestrator | 2025-09-08 00:46:56 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:56.054263 | orchestrator | 2025-09-08 00:46:56 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:56.054730 | orchestrator | 2025-09-08 00:46:56 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:56.055310 | orchestrator | 2025-09-08 00:46:56 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:56.055332 | orchestrator | 2025-09-08 00:46:56 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:46:59.096845 | orchestrator | 2025-09-08 00:46:59 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:46:59.096959 | orchestrator | 2025-09-08 00:46:59 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:46:59.097693 | orchestrator | 2025-09-08 00:46:59 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:46:59.098872 | orchestrator | 2025-09-08 00:46:59 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED
2025-09-08 00:46:59.100042 | orchestrator | 2025-09-08 00:46:59 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:46:59.100063 | orchestrator | 2025-09-08 00:46:59 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:02.146310 | orchestrator | 2025-09-08 00:47:02 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:02.153473 | orchestrator | 2025-09-08 00:47:02 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED
2025-09-08 00:47:02.154075 | orchestrator | 2025-09-08 00:47:02 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:02.160191 | orchestrator | 2025-09-08 00:47:02 | INFO  | Task
60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:47:02.168353 | orchestrator | 2025-09-08 00:47:02 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:47:02.168386 | orchestrator | 2025-09-08 00:47:02 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:05.215905 | orchestrator | 2025-09-08 00:47:05 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:47:05.216009 | orchestrator | 2025-09-08 00:47:05 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:47:05.217640 | orchestrator | 2025-09-08 00:47:05 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:47:05.219461 | orchestrator | 2025-09-08 00:47:05 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:47:05.220218 | orchestrator | 2025-09-08 00:47:05 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:47:05.220240 | orchestrator | 2025-09-08 00:47:05 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:08.266077 | orchestrator | 2025-09-08 00:47:08 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:47:08.268886 | orchestrator | 2025-09-08 00:47:08 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:47:08.269576 | orchestrator | 2025-09-08 00:47:08 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:47:08.270380 | orchestrator | 2025-09-08 00:47:08 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:47:08.271548 | orchestrator | 2025-09-08 00:47:08 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:47:08.271680 | orchestrator | 2025-09-08 00:47:08 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:11.467014 | orchestrator | 2025-09-08 00:47:11 | INFO  | Task 
ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:47:11.467800 | orchestrator | 2025-09-08 00:47:11 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:47:11.467858 | orchestrator | 2025-09-08 00:47:11 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:47:11.467873 | orchestrator | 2025-09-08 00:47:11 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:47:11.467886 | orchestrator | 2025-09-08 00:47:11 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:47:11.467900 | orchestrator | 2025-09-08 00:47:11 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:14.517687 | orchestrator | 2025-09-08 00:47:14 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:47:14.521936 | orchestrator | 2025-09-08 00:47:14 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:47:14.531472 | orchestrator | 2025-09-08 00:47:14 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:47:14.532949 | orchestrator | 2025-09-08 00:47:14 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:47:14.535501 | orchestrator | 2025-09-08 00:47:14 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:47:14.535529 | orchestrator | 2025-09-08 00:47:14 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:17.981949 | orchestrator | 2025-09-08 00:47:17 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:47:17.984100 | orchestrator | 2025-09-08 00:47:17 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:47:17.984136 | orchestrator | 2025-09-08 00:47:17 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:47:17.985156 | orchestrator | 2025-09-08 00:47:17 | INFO  | Task 
60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:47:17.986663 | orchestrator | 2025-09-08 00:47:17 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:47:17.986859 | orchestrator | 2025-09-08 00:47:17 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:21.050939 | orchestrator | 2025-09-08 00:47:21 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:47:21.051038 | orchestrator | 2025-09-08 00:47:21 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state STARTED 2025-09-08 00:47:21.053882 | orchestrator | 2025-09-08 00:47:21 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:47:21.055836 | orchestrator | 2025-09-08 00:47:21 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state STARTED 2025-09-08 00:47:21.056734 | orchestrator | 2025-09-08 00:47:21 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:47:21.056764 | orchestrator | 2025-09-08 00:47:21 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:47:24.105497 | orchestrator | 2025-09-08 00:47:24 | INFO  | Task fd649cfd-a01e-42c5-aeb3-66d13ad0bfbc is in state STARTED 2025-09-08 00:47:24.105797 | orchestrator | 2025-09-08 00:47:24 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:47:24.106858 | orchestrator | 2025-09-08 00:47:24 | INFO  | Task c405f1db-9676-4343-b450-7ed33365fbcd is in state SUCCESS 2025-09-08 00:47:24.110238 | orchestrator | 2025-09-08 00:47:24.111963 | orchestrator | 2025-09-08 00:47:24.111997 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-08 00:47:24.112010 | orchestrator | 2025-09-08 00:47:24.112023 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-08 00:47:24.112036 | orchestrator | Monday 08 September 2025 00:43:31 +0000 (0:00:00.222) 
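The long run of identical status lines above is the OSISM task watcher polling each Celery task's state once per second until every task leaves STARTED. A minimal sketch of that wait loop, assuming a `get_state` callable and shortened task IDs for illustration (neither is the actual OSISM API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll each task's state until none is still running, as in the log above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Fake state source for demonstration: each task reaches SUCCESS on its
# third poll (a real watcher would query the task queue backend).
calls = {}
def fake_state(task_id):
    calls[task_id] = calls.get(task_id, 0) + 1
    return "SUCCESS" if calls[task_id] >= 3 else "STARTED"

wait_for_tasks(["c405f1db", "ee34e5de"], fake_state, interval=0)
```

With a one-second interval and tasks that run for minutes, this pattern produces exactly the repetitive output seen in the log.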
orchestrator | 0:00:00.222 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
orchestrator | Monday 08 September 2025 00:43:31 +0000 (0:00:00.929) 0:00:01.152 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
orchestrator | Monday 08 September 2025 00:43:32 +0000 (0:00:00.799) 0:00:01.951 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
orchestrator | Monday 08 September 2025 00:43:33 +0000 (0:00:00.934) 0:00:02.886 ******
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
orchestrator | Monday 08 September 2025 00:43:35 +0000 (0:00:02.036) 0:00:04.922 ******
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
orchestrator | Monday 08 September 2025 00:43:36 +0000 (0:00:01.112) 0:00:06.034 ******
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
orchestrator | Monday 08 September 2025 00:43:37 +0000 (0:00:00.867) 0:00:06.901 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
orchestrator | Monday 08 September 2025 00:43:38 +0000 (0:00:00.582) 0:00:07.484 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
orchestrator | Monday 08 September 2025 00:43:39 +0000 (0:00:00.732) 0:00:08.216 ******
orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
orchestrator | Monday 08 September 2025 00:43:39 +0000 (0:00:00.622) 0:00:08.839 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
orchestrator | Monday 08 September 2025 00:43:40 +0000 (0:00:01.115) 0:00:09.954 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
orchestrator | Monday 08 September 2025 00:43:41 +0000 (0:00:00.821) 0:00:10.775 ******
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
orchestrator | Monday 08 September 2025 00:43:46 +0000 (0:00:05.270) 0:00:16.046 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
orchestrator | Monday 08 September 2025 00:43:48 +0000 (0:00:01.555) 0:00:17.601 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
orchestrator | Monday 08 September 2025 00:43:50 +0000 (0:00:02.093) 0:00:19.694 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
orchestrator | Monday 08 September 2025 00:43:52 +0000 (0:00:01.611) 0:00:21.306 ******
orchestrator | changed: [testbed-node-3] => (item=rancher)
orchestrator | changed: [testbed-node-4] => (item=rancher)
orchestrator | changed: [testbed-node-0] => (item=rancher)
orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-5] => (item=rancher)
orchestrator | changed: [testbed-node-1] => (item=rancher)
orchestrator | changed: [testbed-node-2] => (item=rancher)
orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
orchestrator |
orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
orchestrator | Monday 08 September 2025 00:43:54 +0000 (0:00:02.609) 0:00:23.915 ******
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
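The two k3s_custom_registries tasks above create `/etc/rancher/k3s` and write a `registries.yaml` that points image pulls at a mirror. A sketch of rendering such a file in k3s's `mirrors:`/`endpoint:` format; the mirror URL below is a placeholder, not the testbed's actual registry:

```python
# Render a k3s registries.yaml that redirects pulls for a registry to
# one or more mirror endpoints, as the k3s_custom_registries role does.
def render_registries(mirrors):
    lines = ["mirrors:"]
    for registry, endpoints in mirrors.items():
        lines.append(f"  {registry}:")
        lines.append("    endpoint:")
        for ep in endpoints:
            lines.append(f'      - "{ep}"')
    return "\n".join(lines) + "\n"

# Placeholder endpoint; the real deployment fills in its own mirror.
content = render_registries({"docker.io": ["https://registry.example.com:5000"]})
print(content)
```

k3s reads this file at startup, so the role writes it before the k3s services are launched in the next play.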
orchestrator |
orchestrator | PLAY [Deploy k3s master nodes] *************************************************
orchestrator |
orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
orchestrator | Monday 08 September 2025 00:43:57 +0000 (0:00:03.157) 0:00:27.073 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
orchestrator | Monday 08 September 2025 00:44:00 +0000 (0:00:02.180) 0:00:29.254 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-1]
orchestrator |
orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
orchestrator | Monday 08 September 2025 00:44:01 +0000 (0:00:01.200) 0:00:30.454 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
orchestrator | Monday 08 September 2025 00:44:02 +0000 (0:00:01.120) 0:00:31.575 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
orchestrator | Monday 08 September 2025 00:44:03 +0000 (0:00:01.191) 0:00:32.766 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
orchestrator | Monday 08 September 2025 00:44:04 +0000 (0:00:00.593) 0:00:33.360 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
orchestrator | Monday 08 September 2025 00:44:04 +0000 (0:00:00.786) 0:00:34.146 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
orchestrator | Monday 08 September 2025 00:44:06 +0000 (0:00:01.688) 0:00:35.835 ******
orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-1, testbed-node-0, testbed-node-2
orchestrator |
orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
orchestrator | Monday 08 September 2025 00:44:07 +0000 (0:00:00.559) 0:00:36.395 ******
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
orchestrator | Monday 08 September 2025 00:44:09 +0000 (0:00:01.929) 0:00:38.324 ******
orchestrator | skipping: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
orchestrator | Monday 08 September 2025 00:44:09 +0000 (0:00:00.672) 0:00:38.996 ******
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
orchestrator | Monday 08 September 2025 00:44:10 +0000 (0:00:01.124) 0:00:40.121 ******
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
orchestrator | Monday 08 September 2025 00:44:12 +0000 (0:00:01.732) 0:00:41.853 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
orchestrator | Monday 08 September 2025 00:44:13 +0000 (0:00:00.494) 0:00:42.347 ******
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
orchestrator | Monday 08 September 2025 00:44:13 +0000 (0:00:00.578) 0:00:42.926 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
orchestrator | Monday 08 September 2025 00:44:15 +0000 (0:00:01.576) 0:00:44.503 ******
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
orchestrator | Monday 08 September 2025 00:45:11 +0000 (0:00:56.296) 0:01:40.800 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
orchestrator | Monday 08 September 2025 00:45:11 +0000 (0:00:00.382) 0:01:41.182 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
orchestrator | Monday 08 September 2025 00:45:13 +0000 (0:00:01.541) 0:01:42.724
****** 2025-09-08 00:47:24.116232 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:24.116242 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:24.116251 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:24.116261 | orchestrator | 2025-09-08 00:47:24.116270 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-08 00:47:24.116280 | orchestrator | Monday 08 September 2025 00:45:15 +0000 (0:00:01.589) 0:01:44.313 ****** 2025-09-08 00:47:24.116290 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:24.116299 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:24.116309 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:24.116318 | orchestrator | 2025-09-08 00:47:24.116328 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-08 00:47:24.116337 | orchestrator | Monday 08 September 2025 00:45:42 +0000 (0:00:27.308) 0:02:11.622 ****** 2025-09-08 00:47:24.116353 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:24.116363 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:24.116372 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:24.116382 | orchestrator | 2025-09-08 00:47:24.116391 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-08 00:47:24.116405 | orchestrator | Monday 08 September 2025 00:45:43 +0000 (0:00:00.749) 0:02:12.372 ****** 2025-09-08 00:47:24.116415 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:24.116425 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:24.116434 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:24.116444 | orchestrator | 2025-09-08 00:47:24.116479 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-08 00:47:24.116491 | orchestrator | Monday 08 September 2025 00:45:44 +0000 (0:00:00.906) 0:02:13.279 ****** 2025-09-08 00:47:24.116501 | 
orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:24.116510 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:24.116520 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:24.116529 | orchestrator | 2025-09-08 00:47:24.116539 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-08 00:47:24.116549 | orchestrator | Monday 08 September 2025 00:45:44 +0000 (0:00:00.655) 0:02:13.934 ****** 2025-09-08 00:47:24.116558 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:24.116568 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:24.116577 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:24.116586 | orchestrator | 2025-09-08 00:47:24.116612 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-08 00:47:24.116622 | orchestrator | Monday 08 September 2025 00:45:45 +0000 (0:00:00.763) 0:02:14.698 ****** 2025-09-08 00:47:24.116631 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:24.116641 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:24.116650 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:24.116660 | orchestrator | 2025-09-08 00:47:24.116669 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-08 00:47:24.116679 | orchestrator | Monday 08 September 2025 00:45:45 +0000 (0:00:00.382) 0:02:15.080 ****** 2025-09-08 00:47:24.116689 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:24.116698 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:24.116708 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:24.116717 | orchestrator | 2025-09-08 00:47:24.116727 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-08 00:47:24.116737 | orchestrator | Monday 08 September 2025 00:45:46 +0000 (0:00:00.993) 0:02:16.074 ****** 2025-09-08 00:47:24.116746 | orchestrator | changed: [testbed-node-0] 
2025-09-08 00:47:24.116756 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:24.116765 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:24.116774 | orchestrator | 2025-09-08 00:47:24.116784 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-08 00:47:24.116793 | orchestrator | Monday 08 September 2025 00:45:47 +0000 (0:00:00.741) 0:02:16.816 ****** 2025-09-08 00:47:24.116803 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:24.116812 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:24.116822 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:24.116831 | orchestrator | 2025-09-08 00:47:24.116841 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-08 00:47:24.116851 | orchestrator | Monday 08 September 2025 00:45:48 +0000 (0:00:00.952) 0:02:17.768 ****** 2025-09-08 00:47:24.116860 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:47:24.116870 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:47:24.116879 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:47:24.116888 | orchestrator | 2025-09-08 00:47:24.116898 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-08 00:47:24.116908 | orchestrator | Monday 08 September 2025 00:45:49 +0000 (0:00:00.971) 0:02:18.740 ****** 2025-09-08 00:47:24.116917 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.116935 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:24.116944 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:24.116954 | orchestrator | 2025-09-08 00:47:24.116963 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-08 00:47:24.116973 | orchestrator | Monday 08 September 2025 00:45:50 +0000 (0:00:00.513) 0:02:19.253 ****** 2025-09-08 00:47:24.116982 | orchestrator | skipping: [testbed-node-0] 2025-09-08 
00:47:24.116992 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:24.117001 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:24.117012 | orchestrator | 2025-09-08 00:47:24.117028 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-08 00:47:24.117044 | orchestrator | Monday 08 September 2025 00:45:50 +0000 (0:00:00.305) 0:02:19.559 ****** 2025-09-08 00:47:24.117061 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:24.117077 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:24.117095 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:24.117112 | orchestrator | 2025-09-08 00:47:24.117129 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-08 00:47:24.117141 | orchestrator | Monday 08 September 2025 00:45:51 +0000 (0:00:00.751) 0:02:20.311 ****** 2025-09-08 00:47:24.117151 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:24.117161 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:24.117170 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:24.117180 | orchestrator | 2025-09-08 00:47:24.117190 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-08 00:47:24.117200 | orchestrator | Monday 08 September 2025 00:45:51 +0000 (0:00:00.730) 0:02:21.041 ****** 2025-09-08 00:47:24.117209 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-08 00:47:24.117219 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-08 00:47:24.117229 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-08 00:47:24.117239 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-08 
00:47:24.117248 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-08 00:47:24.117258 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-08 00:47:24.117267 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-08 00:47:24.117282 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-08 00:47:24.117292 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-08 00:47:24.117308 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-08 00:47:24.117317 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-08 00:47:24.117327 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-08 00:47:24.117337 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-08 00:47:24.117346 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-08 00:47:24.117356 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-08 00:47:24.117365 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-08 00:47:24.117375 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-08 00:47:24.117384 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-08 00:47:24.117491 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-08 00:47:24.117505 | orchestrator 
| changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-08 00:47:24.117514 | orchestrator | 2025-09-08 00:47:24.117524 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-08 00:47:24.117533 | orchestrator | 2025-09-08 00:47:24.117543 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-08 00:47:24.117552 | orchestrator | Monday 08 September 2025 00:45:55 +0000 (0:00:03.985) 0:02:25.028 ****** 2025-09-08 00:47:24.117562 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:47:24.117571 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:47:24.117581 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:47:24.117732 | orchestrator | 2025-09-08 00:47:24.117916 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-08 00:47:24.117937 | orchestrator | Monday 08 September 2025 00:45:56 +0000 (0:00:00.373) 0:02:25.401 ****** 2025-09-08 00:47:24.117950 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:47:24.117963 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:47:24.117974 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:47:24.117986 | orchestrator | 2025-09-08 00:47:24.117998 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-08 00:47:24.118009 | orchestrator | Monday 08 September 2025 00:45:56 +0000 (0:00:00.644) 0:02:26.046 ****** 2025-09-08 00:47:24.118080 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:47:24.118091 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:47:24.118102 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:47:24.118113 | orchestrator | 2025-09-08 00:47:24.118124 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-08 00:47:24.118136 | orchestrator | Monday 08 September 2025 00:45:57 +0000 (0:00:00.372) 0:02:26.418 ****** 
2025-09-08 00:47:24.118148 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:47:24.118160 | orchestrator | 2025-09-08 00:47:24.118171 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-08 00:47:24.118182 | orchestrator | Monday 08 September 2025 00:45:57 +0000 (0:00:00.712) 0:02:27.130 ****** 2025-09-08 00:47:24.118193 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:24.118205 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:24.118215 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:24.118658 | orchestrator | 2025-09-08 00:47:24.118685 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-09-08 00:47:24.118697 | orchestrator | Monday 08 September 2025 00:45:58 +0000 (0:00:00.333) 0:02:27.464 ****** 2025-09-08 00:47:24.118708 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:24.118719 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:24.118730 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:24.118741 | orchestrator | 2025-09-08 00:47:24.118752 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-08 00:47:24.118763 | orchestrator | Monday 08 September 2025 00:45:58 +0000 (0:00:00.290) 0:02:27.754 ****** 2025-09-08 00:47:24.118774 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:47:24.118784 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:47:24.118795 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:47:24.118806 | orchestrator | 2025-09-08 00:47:24.118816 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-08 00:47:24.118827 | orchestrator | Monday 08 September 2025 00:45:59 +0000 (0:00:00.533) 0:02:28.288 ****** 2025-09-08 00:47:24.118838 | orchestrator | ok: [testbed-node-3] 
2025-09-08 00:47:24.118849 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:47:24.118860 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:47:24.118871 | orchestrator | 2025-09-08 00:47:24.118882 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-08 00:47:24.118892 | orchestrator | Monday 08 September 2025 00:45:59 +0000 (0:00:00.662) 0:02:28.951 ****** 2025-09-08 00:47:24.118937 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:47:24.118949 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:47:24.118960 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:47:24.118970 | orchestrator | 2025-09-08 00:47:24.118981 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-08 00:47:24.118992 | orchestrator | Monday 08 September 2025 00:46:00 +0000 (0:00:01.132) 0:02:30.083 ****** 2025-09-08 00:47:24.119002 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:47:24.119013 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:47:24.119024 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:47:24.119034 | orchestrator | 2025-09-08 00:47:24.119045 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-08 00:47:24.119074 | orchestrator | Monday 08 September 2025 00:46:02 +0000 (0:00:01.245) 0:02:31.328 ****** 2025-09-08 00:47:24.119086 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:47:24.119096 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:47:24.119107 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:47:24.119118 | orchestrator | 2025-09-08 00:47:24.119161 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-08 00:47:24.119173 | orchestrator | 2025-09-08 00:47:24.119184 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-08 00:47:24.119195 | orchestrator | 
Monday 08 September 2025 00:46:14 +0000 (0:00:12.153) 0:02:43.481 ****** 2025-09-08 00:47:24.119206 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:24.119217 | orchestrator | 2025-09-08 00:47:24.119228 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-08 00:47:24.119241 | orchestrator | Monday 08 September 2025 00:46:14 +0000 (0:00:00.705) 0:02:44.187 ****** 2025-09-08 00:47:24.119253 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:24.119266 | orchestrator | 2025-09-08 00:47:24.119279 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-08 00:47:24.119292 | orchestrator | Monday 08 September 2025 00:46:15 +0000 (0:00:00.569) 0:02:44.756 ****** 2025-09-08 00:47:24.119306 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-08 00:47:24.119318 | orchestrator | 2025-09-08 00:47:24.119331 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-08 00:47:24.119344 | orchestrator | Monday 08 September 2025 00:46:16 +0000 (0:00:00.596) 0:02:45.353 ****** 2025-09-08 00:47:24.119356 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:24.119369 | orchestrator | 2025-09-08 00:47:24.119382 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-08 00:47:24.119394 | orchestrator | Monday 08 September 2025 00:46:17 +0000 (0:00:01.159) 0:02:46.513 ****** 2025-09-08 00:47:24.119407 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:24.119419 | orchestrator | 2025-09-08 00:47:24.119432 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-08 00:47:24.119446 | orchestrator | Monday 08 September 2025 00:46:18 +0000 (0:00:01.171) 0:02:47.684 ****** 2025-09-08 00:47:24.119459 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-08 00:47:24.119472 | 
orchestrator | 2025-09-08 00:47:24.119485 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-08 00:47:24.119497 | orchestrator | Monday 08 September 2025 00:46:20 +0000 (0:00:01.933) 0:02:49.618 ****** 2025-09-08 00:47:24.119510 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-08 00:47:24.119523 | orchestrator | 2025-09-08 00:47:24.119536 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-08 00:47:24.119549 | orchestrator | Monday 08 September 2025 00:46:21 +0000 (0:00:00.925) 0:02:50.543 ****** 2025-09-08 00:47:24.119562 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:24.119574 | orchestrator | 2025-09-08 00:47:24.119587 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-08 00:47:24.119620 | orchestrator | Monday 08 September 2025 00:46:21 +0000 (0:00:00.501) 0:02:51.044 ****** 2025-09-08 00:47:24.119640 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:24.119651 | orchestrator | 2025-09-08 00:47:24.119662 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-08 00:47:24.119673 | orchestrator | 2025-09-08 00:47:24.119683 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-08 00:47:24.119694 | orchestrator | Monday 08 September 2025 00:46:22 +0000 (0:00:00.420) 0:02:51.465 ****** 2025-09-08 00:47:24.119705 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:24.119715 | orchestrator | 2025-09-08 00:47:24.119726 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-08 00:47:24.119737 | orchestrator | Monday 08 September 2025 00:46:22 +0000 (0:00:00.163) 0:02:51.628 ****** 2025-09-08 00:47:24.119748 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 
2025-09-08 00:47:24.119759 | orchestrator | 2025-09-08 00:47:24.119770 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-08 00:47:24.119780 | orchestrator | Monday 08 September 2025 00:46:22 +0000 (0:00:00.227) 0:02:51.856 ****** 2025-09-08 00:47:24.119791 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:24.119802 | orchestrator | 2025-09-08 00:47:24.119813 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-08 00:47:24.119823 | orchestrator | Monday 08 September 2025 00:46:23 +0000 (0:00:00.871) 0:02:52.728 ****** 2025-09-08 00:47:24.119834 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:24.119845 | orchestrator | 2025-09-08 00:47:24.119856 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-08 00:47:24.119867 | orchestrator | Monday 08 September 2025 00:46:25 +0000 (0:00:02.262) 0:02:54.990 ****** 2025-09-08 00:47:24.119878 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:24.119889 | orchestrator | 2025-09-08 00:47:24.119899 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-08 00:47:24.119910 | orchestrator | Monday 08 September 2025 00:46:26 +0000 (0:00:00.867) 0:02:55.857 ****** 2025-09-08 00:47:24.119921 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:24.119932 | orchestrator | 2025-09-08 00:47:24.119943 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-08 00:47:24.119954 | orchestrator | Monday 08 September 2025 00:46:27 +0000 (0:00:00.448) 0:02:56.306 ****** 2025-09-08 00:47:24.119965 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:24.119975 | orchestrator | 2025-09-08 00:47:24.119986 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-08 00:47:24.119997 | orchestrator | Monday 08 
September 2025 00:46:34 +0000 (0:00:07.392) 0:03:03.699 ****** 2025-09-08 00:47:24.120008 | orchestrator | changed: [testbed-manager] 2025-09-08 00:47:24.120019 | orchestrator | 2025-09-08 00:47:24.120029 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-08 00:47:24.120040 | orchestrator | Monday 08 September 2025 00:46:47 +0000 (0:00:13.117) 0:03:16.817 ****** 2025-09-08 00:47:24.120051 | orchestrator | ok: [testbed-manager] 2025-09-08 00:47:24.120062 | orchestrator | 2025-09-08 00:47:24.120078 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-08 00:47:24.120090 | orchestrator | 2025-09-08 00:47:24.120100 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-08 00:47:24.120120 | orchestrator | Monday 08 September 2025 00:46:48 +0000 (0:00:00.499) 0:03:17.316 ****** 2025-09-08 00:47:24.120132 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:47:24.120143 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:47:24.120153 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:47:24.120164 | orchestrator | 2025-09-08 00:47:24.120175 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-08 00:47:24.120186 | orchestrator | Monday 08 September 2025 00:46:48 +0000 (0:00:00.363) 0:03:17.679 ****** 2025-09-08 00:47:24.120197 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120207 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:47:24.120234 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:47:24.120245 | orchestrator | 2025-09-08 00:47:24.120256 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-08 00:47:24.120267 | orchestrator | Monday 08 September 2025 00:46:48 +0000 (0:00:00.482) 0:03:18.161 ****** 2025-09-08 00:47:24.120278 | orchestrator | included: 
/ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-08 00:47:24.120289 | orchestrator | 2025-09-08 00:47:24.120300 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-08 00:47:24.120311 | orchestrator | Monday 08 September 2025 00:46:49 +0000 (0:00:00.739) 0:03:18.901 ****** 2025-09-08 00:47:24.120322 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120333 | orchestrator | 2025-09-08 00:47:24.120344 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-08 00:47:24.120354 | orchestrator | Monday 08 September 2025 00:46:49 +0000 (0:00:00.173) 0:03:19.075 ****** 2025-09-08 00:47:24.120365 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120376 | orchestrator | 2025-09-08 00:47:24.120387 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-08 00:47:24.120398 | orchestrator | Monday 08 September 2025 00:46:50 +0000 (0:00:00.185) 0:03:19.260 ****** 2025-09-08 00:47:24.120409 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120420 | orchestrator | 2025-09-08 00:47:24.120430 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-08 00:47:24.120441 | orchestrator | Monday 08 September 2025 00:46:50 +0000 (0:00:00.245) 0:03:19.505 ****** 2025-09-08 00:47:24.120452 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120463 | orchestrator | 2025-09-08 00:47:24.120474 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-08 00:47:24.120485 | orchestrator | Monday 08 September 2025 00:46:50 +0000 (0:00:00.572) 0:03:20.078 ****** 2025-09-08 00:47:24.120496 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120507 | orchestrator | 2025-09-08 00:47:24.120517 | orchestrator | TASK [k3s_server_post : Log 
installed Cilium CLI version] ********************** 2025-09-08 00:47:24.120528 | orchestrator | Monday 08 September 2025 00:46:51 +0000 (0:00:00.204) 0:03:20.283 ****** 2025-09-08 00:47:24.120539 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120550 | orchestrator | 2025-09-08 00:47:24.120561 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-08 00:47:24.120572 | orchestrator | Monday 08 September 2025 00:46:51 +0000 (0:00:00.194) 0:03:20.477 ****** 2025-09-08 00:47:24.120583 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120610 | orchestrator | 2025-09-08 00:47:24.120622 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-08 00:47:24.120633 | orchestrator | Monday 08 September 2025 00:46:51 +0000 (0:00:00.209) 0:03:20.686 ****** 2025-09-08 00:47:24.120643 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120654 | orchestrator | 2025-09-08 00:47:24.120665 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-08 00:47:24.120676 | orchestrator | Monday 08 September 2025 00:46:51 +0000 (0:00:00.267) 0:03:20.954 ****** 2025-09-08 00:47:24.120686 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120697 | orchestrator | 2025-09-08 00:47:24.120708 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-08 00:47:24.120719 | orchestrator | Monday 08 September 2025 00:46:51 +0000 (0:00:00.191) 0:03:21.145 ****** 2025-09-08 00:47:24.120730 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-08 00:47:24.120741 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-08 00:47:24.120752 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120763 | orchestrator | 2025-09-08 00:47:24.120774 | orchestrator | TASK [k3s_server_post : Verify the downloaded 
tarball] ************************* 2025-09-08 00:47:24.120785 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:00.283) 0:03:21.429 ****** 2025-09-08 00:47:24.120795 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120813 | orchestrator | 2025-09-08 00:47:24.120824 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-08 00:47:24.120835 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:00.186) 0:03:21.616 ****** 2025-09-08 00:47:24.120845 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120856 | orchestrator | 2025-09-08 00:47:24.120867 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-08 00:47:24.120878 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:00.192) 0:03:21.809 ****** 2025-09-08 00:47:24.120889 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120899 | orchestrator | 2025-09-08 00:47:24.120910 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-08 00:47:24.120921 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:00.177) 0:03:21.987 ****** 2025-09-08 00:47:24.120932 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120943 | orchestrator | 2025-09-08 00:47:24.120953 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-08 00:47:24.120964 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:00.171) 0:03:22.158 ****** 2025-09-08 00:47:24.120975 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.120986 | orchestrator | 2025-09-08 00:47:24.120996 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-08 00:47:24.121007 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:00.188) 0:03:22.346 ****** 2025-09-08 00:47:24.121018 | orchestrator | skipping: 
[testbed-node-0] 2025-09-08 00:47:24.121029 | orchestrator | 2025-09-08 00:47:24.121040 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-08 00:47:24.121058 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:00.534) 0:03:22.880 ****** 2025-09-08 00:47:24.121069 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.121080 | orchestrator | 2025-09-08 00:47:24.121091 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-08 00:47:24.121102 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:00.204) 0:03:23.085 ****** 2025-09-08 00:47:24.121112 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.121123 | orchestrator | 2025-09-08 00:47:24.121143 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-08 00:47:24.121154 | orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:00.201) 0:03:23.287 ****** 2025-09-08 00:47:24.121164 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.121175 | orchestrator | 2025-09-08 00:47:24.121186 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-08 00:47:24.121197 | orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:00.183) 0:03:23.471 ****** 2025-09-08 00:47:24.121208 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.121219 | orchestrator | 2025-09-08 00:47:24.121230 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-08 00:47:24.121240 | orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:00.191) 0:03:23.662 ****** 2025-09-08 00:47:24.121251 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:47:24.121262 | orchestrator | 2025-09-08 00:47:24.121273 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-08 00:47:24.121284 | 
orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:00.181) 0:03:23.844 ******
2025-09-08 00:47:24.121295 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-08 00:47:24.121306 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-08 00:47:24.121317 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-08 00:47:24.121328 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-08 00:47:24.121339 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.121350 | orchestrator |
2025-09-08 00:47:24.121361 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-08 00:47:24.121372 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:00.417) 0:03:24.262 ******
2025-09-08 00:47:24.121390 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.121401 | orchestrator |
2025-09-08 00:47:24.121412 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-08 00:47:24.121423 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:00.186) 0:03:24.448 ******
2025-09-08 00:47:24.121434 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.121445 | orchestrator |
2025-09-08 00:47:24.121455 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-08 00:47:24.121466 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:00.195) 0:03:24.644 ******
2025-09-08 00:47:24.121477 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.121488 | orchestrator |
2025-09-08 00:47:24.121499 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-08 00:47:24.121510 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:00.179) 0:03:24.823 ******
2025-09-08 00:47:24.121520 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.121531 | orchestrator |
2025-09-08 00:47:24.121542 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-08 00:47:24.121553 | orchestrator | Monday 08 September 2025 00:46:55 +0000 (0:00:00.183) 0:03:25.006 ******
2025-09-08 00:47:24.121564 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-08 00:47:24.121574 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-08 00:47:24.121585 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.121639 | orchestrator |
2025-09-08 00:47:24.121651 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-08 00:47:24.121662 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:00.519) 0:03:25.526 ******
2025-09-08 00:47:24.121672 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.121683 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:24.121694 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:24.121704 | orchestrator |
2025-09-08 00:47:24.121715 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-08 00:47:24.121726 | orchestrator | Monday 08 September 2025 00:46:57 +0000 (0:00:00.861) 0:03:26.387 ******
2025-09-08 00:47:24.121736 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:47:24.121747 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:47:24.121758 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:47:24.121769 | orchestrator |
2025-09-08 00:47:24.121779 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-08 00:47:24.121790 | orchestrator |
2025-09-08 00:47:24.121801 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-08 00:47:24.121812 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.125) 0:03:27.349 ******
2025-09-08 00:47:24.121822 | orchestrator | ok: [testbed-manager]
2025-09-08 00:47:24.121833 | orchestrator |
2025-09-08 00:47:24.121843 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-09-08 00:47:24.121854 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.315) 0:03:27.474 ******
2025-09-08 00:47:24.121865 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-09-08 00:47:24.121876 | orchestrator |
2025-09-08 00:47:24.121887 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-08 00:47:24.121897 | orchestrator | Monday 08 September 2025 00:46:58 +0000 (0:00:00.315) 0:03:27.789 ******
2025-09-08 00:47:24.121908 | orchestrator | changed: [testbed-manager]
2025-09-08 00:47:24.121919 | orchestrator |
2025-09-08 00:47:24.121942 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-08 00:47:24.121953 | orchestrator |
2025-09-08 00:47:24.121964 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-08 00:47:24.121981 | orchestrator | Monday 08 September 2025 00:47:05 +0000 (0:00:06.649) 0:03:34.439 ******
2025-09-08 00:47:24.121993 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:47:24.122011 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:47:24.122057 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:47:24.122069 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:47:24.122084 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:47:24.122103 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:47:24.122123 | orchestrator |
2025-09-08 00:47:24.122141 | orchestrator | TASK [Manage labels] ***********************************************************
2025-09-08 00:47:24.122160 | orchestrator | Monday 08 September 2025 00:47:05 +0000
(0:00:00.707) 0:03:35.147 ******
2025-09-08 00:47:24.122179 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-08 00:47:24.122199 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-08 00:47:24.122218 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-08 00:47:24.122236 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-08 00:47:24.122254 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-08 00:47:24.122273 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-08 00:47:24.122293 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-08 00:47:24.122313 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-08 00:47:24.122330 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-08 00:47:24.122347 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-08 00:47:24.122358 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-08 00:47:24.122369 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-08 00:47:24.122380 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-08 00:47:24.122391 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-08 00:47:24.122402 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-08 00:47:24.122413 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-08 00:47:24.122424 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-08 00:47:24.122435 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-08 00:47:24.122446 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-08 00:47:24.122457 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-08 00:47:24.122468 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-08 00:47:24.122478 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-08 00:47:24.122489 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-08 00:47:24.122500 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-08 00:47:24.122511 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-08 00:47:24.122522 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-08 00:47:24.122533 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-08 00:47:24.122544 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-08 00:47:24.122554 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-08 00:47:24.122575 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-08 00:47:24.122586 | orchestrator |
2025-09-08 00:47:24.122655 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-08 00:47:24.122666 | orchestrator | Monday 08
September 2025 00:47:20 +0000 (0:00:14.759) 0:03:49.906 ******
2025-09-08 00:47:24.122677 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:24.122688 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:24.122699 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:24.122710 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.122721 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:24.122731 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:24.122742 | orchestrator |
2025-09-08 00:47:24.122753 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-08 00:47:24.122764 | orchestrator | Monday 08 September 2025 00:47:21 +0000 (0:00:00.542) 0:03:50.448 ******
2025-09-08 00:47:24.122775 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:24.122786 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:24.122797 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:24.122806 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.122815 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:24.122825 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:24.122834 | orchestrator |
2025-09-08 00:47:24.122850 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:47:24.122870 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:47:24.122884 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-08 00:47:24.122894 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-08 00:47:24.122904 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-08 00:47:24.122914 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-08 00:47:24.122924 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-08 00:47:24.122933 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-08 00:47:24.122943 | orchestrator |
2025-09-08 00:47:24.122952 | orchestrator |
2025-09-08 00:47:24.122962 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:47:24.122972 | orchestrator | Monday 08 September 2025 00:47:21 +0000 (0:00:00.580) 0:03:51.028 ******
2025-09-08 00:47:24.122982 | orchestrator | ===============================================================================
2025-09-08 00:47:24.122991 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.30s
2025-09-08 00:47:24.123002 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.31s
2025-09-08 00:47:24.123012 | orchestrator | Manage labels ---------------------------------------------------------- 14.76s
2025-09-08 00:47:24.123021 | orchestrator | kubectl : Install required packages ------------------------------------ 13.12s
2025-09-08 00:47:24.123031 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.15s
2025-09-08 00:47:24.123040 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.39s
2025-09-08 00:47:24.123050 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.65s
2025-09-08 00:47:24.123059 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.27s
2025-09-08 00:47:24.123076 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.99s
2025-09-08 00:47:24.123086 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 3.16s
2025-09-08 00:47:24.123095 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.61s
2025-09-08 00:47:24.123105 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.26s
2025-09-08 00:47:24.123115 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.18s
2025-09-08 00:47:24.123124 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.09s
2025-09-08 00:47:24.123133 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.04s
2025-09-08 00:47:24.123143 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.93s
2025-09-08 00:47:24.123152 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.93s
2025-09-08 00:47:24.123162 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.73s
2025-09-08 00:47:24.123171 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.69s
2025-09-08 00:47:24.123181 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.61s
2025-09-08 00:47:24.123191 | orchestrator | 2025-09-08 00:47:24 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:24.123200 | orchestrator | 2025-09-08 00:47:24 | INFO  | Task 60ba63ab-1707-447f-8e79-b5f43aa630bd is in state SUCCESS
2025-09-08 00:47:24.123210 | orchestrator |
2025-09-08 00:47:24.123219 | orchestrator |
2025-09-08 00:47:24.123229 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:47:24.123239 | orchestrator |
2025-09-08 00:47:24.123248 | orchestrator | TASK [Group hosts based
on Kolla action] ***************************************
2025-09-08 00:47:24.123258 | orchestrator | Monday 08 September 2025 00:46:08 +0000 (0:00:00.281) 0:00:00.281 ******
2025-09-08 00:47:24.123267 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:47:24.123277 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:47:24.123287 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:47:24.123296 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:47:24.123306 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:47:24.123315 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:47:24.123325 | orchestrator |
2025-09-08 00:47:24.123334 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:47:24.123344 | orchestrator | Monday 08 September 2025 00:46:10 +0000 (0:00:01.096) 0:00:01.378 ******
2025-09-08 00:47:24.123358 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-08 00:47:24.123368 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-08 00:47:24.123383 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-08 00:47:24.123393 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-08 00:47:24.123403 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-08 00:47:24.123412 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-08 00:47:24.123422 | orchestrator |
2025-09-08 00:47:24.123432 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-08 00:47:24.123441 | orchestrator |
2025-09-08 00:47:24.123451 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-08 00:47:24.123460 | orchestrator | Monday 08 September 2025 00:46:10 +0000 (0:00:00.553) 0:00:01.931 ******
2025-09-08 00:47:24.123470 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:47:24.123488 | orchestrator |
2025-09-08 00:47:24.123498 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-08 00:47:24.123507 | orchestrator | Monday 08 September 2025 00:46:11 +0000 (0:00:00.969) 0:00:02.901 ******
2025-09-08 00:47:24.123517 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-08 00:47:24.123527 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-08 00:47:24.123537 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-08 00:47:24.123546 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-08 00:47:24.123556 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-08 00:47:24.123565 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-08 00:47:24.123575 | orchestrator |
2025-09-08 00:47:24.123585 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-08 00:47:24.123612 | orchestrator | Monday 08 September 2025 00:46:12 +0000 (0:00:01.047) 0:00:03.948 ******
2025-09-08 00:47:24.123622 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-08 00:47:24.123632 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-08 00:47:24.123641 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-08 00:47:24.123651 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-08 00:47:24.123660 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-08 00:47:24.123670 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-08 00:47:24.123679 | orchestrator |
2025-09-08 00:47:24.123689 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-08 00:47:24.123699 | orchestrator | Monday 08 September 2025 00:46:14 +0000 (0:00:01.550) 0:00:05.499 ******
2025-09-08 00:47:24.123708 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-08 00:47:24.123718 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:24.123727 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-09-08 00:47:24.123737 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:24.123747 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-09-08 00:47:24.123756 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:24.123766 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-08 00:47:24.123775 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.123785 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-08 00:47:24.123794 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:24.123804 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-08 00:47:24.123813 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:24.123823 | orchestrator |
2025-09-08 00:47:24.123833 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-09-08 00:47:24.123842 | orchestrator | Monday 08 September 2025 00:46:15 +0000 (0:00:00.775) 0:00:07.079 ******
2025-09-08 00:47:24.123852 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:24.123861 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:24.123871 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:24.123880 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.123890 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:24.123900 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:24.123909 | orchestrator |
2025-09-08 00:47:24.123919 | orchestrator | TASK
[openvswitch : Ensuring config directories exist] *************************
2025-09-08 00:47:24.123928 | orchestrator | Monday 08 September 2025 00:46:16 +0000 (0:00:00.775) 0:00:07.855 ******
2025-09-08 00:47:24.123941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.123980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.123992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled':
True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:24.124123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-08 00:47:24.124139 | orchestrator | 2025-09-08 00:47:24.124149 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-08 00:47:24.124159 | orchestrator | Monday 08 September 2025 00:46:18 +0000 (0:00:02.045) 0:00:09.901 ****** 2025-09-08 00:47:24.124181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124193 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124293 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd',
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124337 | orchestrator |
2025-09-08 00:47:24.124353 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-09-08 00:47:24.124363 | orchestrator | Monday 08 September 2025 00:46:22 +0000 (0:00:03.629) 0:00:13.530 ******
2025-09-08 00:47:24.124373 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:24.124383 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:24.124392 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:24.124402 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:47:24.124412 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:47:24.124421 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:47:24.124431 | orchestrator |
2025-09-08 00:47:24.124440 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-09-08 00:47:24.124450 | orchestrator | Monday 08 September 2025 00:46:23 +0000 (0:00:01.287) 0:00:14.817 ******
2025-09-08 00:47:24.124460 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes':
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08 00:47:24.124529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124565 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-08
00:47:24.124626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-08 00:47:24.124636 | orchestrator |
2025-09-08 00:47:24.124646 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-08 00:47:24.124656 | orchestrator | Monday 08 September 2025 00:46:27 +0000 (0:00:04.414) 0:00:19.232 ******
2025-09-08 00:47:24.124666 | orchestrator |
2025-09-08 00:47:24.124676 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-08 00:47:24.124686 | orchestrator | Monday 08 September 2025 00:46:28 +0000 (0:00:00.385) 0:00:19.617 ******
2025-09-08 00:47:24.124695 | orchestrator |
2025-09-08 00:47:24.124705 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-08 00:47:24.124715 | orchestrator | Monday 08 September 2025 00:46:28 +0000 (0:00:00.230) 0:00:19.847 ******
2025-09-08 00:47:24.124725 | orchestrator |
2025-09-08 00:47:24.124734 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-08 00:47:24.124744 | orchestrator | Monday 08 September 2025 00:46:28 +0000 (0:00:00.213) 0:00:20.061 ******
2025-09-08 00:47:24.124754 | orchestrator |
2025-09-08 00:47:24.124763 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-08 00:47:24.124779 | orchestrator | Monday 08 September 2025 00:46:28 +0000 (0:00:00.211) 0:00:20.273 ******
2025-09-08 00:47:24.124788 | orchestrator |
2025-09-08 00:47:24.124798 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-08 00:47:24.124808 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:00.318) 0:00:20.591 ******
2025-09-08 00:47:24.124817 | orchestrator |
2025-09-08 00:47:24.124827 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-08 00:47:24.124837 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:00.262) 0:00:20.854 ******
2025-09-08 00:47:24.124846 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:47:24.124856 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:47:24.124866 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:47:24.124875 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:47:24.124885 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:47:24.124895 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:47:24.124904 | orchestrator |
2025-09-08 00:47:24.124914 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-08 00:47:24.124924 | orchestrator | Monday 08 September 2025 00:46:41 +0000 (0:00:11.735) 0:00:32.590 ******
2025-09-08 00:47:24.124933 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:47:24.124943 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:47:24.124953 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:47:24.124962 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:47:24.124972 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:47:24.124982 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:47:24.124991 | orchestrator |
2025-09-08 00:47:24.125001 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-08 00:47:24.125011 | orchestrator | Monday 08 September 2025 00:46:42 +0000 (0:00:01.269) 0:00:33.859 ******
2025-09-08 00:47:24.125020 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:47:24.125030 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:47:24.125040 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:47:24.125049 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:47:24.125059 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:47:24.125069 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:47:24.125078 | orchestrator |
2025-09-08 00:47:24.125088 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-08 00:47:24.125098 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:11.185) 0:00:45.045 ******
2025-09-08 00:47:24.125107 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-08 00:47:24.125117 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-08 00:47:24.125127 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-08 00:47:24.125137 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-08 00:47:24.125147 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-08 00:47:24.125156 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-08 00:47:24.125170 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-08 00:47:24.125186 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-09-08 00:47:24.125196 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-08 00:47:24.125205 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-09-08 00:47:24.125215 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-08 00:47:24.125230 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-09-08 00:47:24.125240 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:24.125250 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:24.125260 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:24.125269 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:24.125279 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:24.125288 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-08 00:47:24.125298 | orchestrator |
2025-09-08 00:47:24.125308 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-09-08 00:47:24.125318 | orchestrator | Monday 08 September 2025 00:47:01 +0000 (0:00:07.950) 0:00:52.996 ******
2025-09-08 00:47:24.125327 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-09-08 00:47:24.125337 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:24.125347 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-09-08 00:47:24.125356 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:24.125366 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-09-08 00:47:24.125375 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:24.125385 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-09-08 00:47:24.125395 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-09-08 00:47:24.125404 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-09-08 00:47:24.125414 | orchestrator |
2025-09-08 00:47:24.125424 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-09-08 00:47:24.125433 | orchestrator | Monday 08 September 2025 00:47:04 +0000 (0:00:02.957) 0:00:55.953 ******
2025-09-08 00:47:24.125443 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-09-08 00:47:24.125453 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:47:24.125462 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-09-08 00:47:24.125472 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:47:24.125482 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-09-08 00:47:24.125491 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:47:24.125501 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-09-08 00:47:24.125510 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-09-08 00:47:24.125520 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-09-08 00:47:24.125530 | orchestrator |
2025-09-08 00:47:24.125539 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-08 00:47:24.125549 | orchestrator | Monday 08 September 2025 00:47:10 +0000 (0:00:06.138) 0:01:02.092
2025-09-08 00:47:24.125559 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:47:24.125568 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:47:24.125578 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:47:24.125588 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:47:24.125612 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:47:24.125621 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:47:24.125631 | orchestrator |
2025-09-08 00:47:24.125640 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:47:24.125651 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-08 00:47:24.125668 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-08 00:47:24.125678 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-08 00:47:24.125688 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-08 00:47:24.125698 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-08 00:47:24.125707 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-08 00:47:24.125717 | orchestrator |
2025-09-08 00:47:24.125726 | orchestrator |
2025-09-08 00:47:24.125736 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:47:24.125750 | orchestrator | Monday 08 September 2025 00:47:20 +0000 (0:00:10.081) 0:01:12.174 ******
2025-09-08 00:47:24.125760 | orchestrator | ===============================================================================
2025-09-08 00:47:24.125770 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.27s
2025-09-08 00:47:24.126496 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.74s
2025-09-08 00:47:24.126518 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.95s
2025-09-08 00:47:24.126528 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 6.14s
2025-09-08 00:47:24.126538 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.41s
2025-09-08 00:47:24.126548 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.63s
2025-09-08 00:47:24.126557 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.96s
2025-09-08 00:47:24.126567 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.05s
2025-09-08 00:47:24.126577 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.62s
2025-09-08 00:47:24.126586 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.58s
2025-09-08 00:47:24.126648 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.55s
2025-09-08 00:47:24.126658 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.29s
2025-09-08 00:47:24.126668 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.27s
2025-09-08 00:47:24.126677 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s
2025-09-08 00:47:24.126687 | orchestrator | module-load : Load modules ---------------------------------------------- 1.05s
2025-09-08 00:47:24.126697 | orchestrator | openvswitch : include_tasks --------------------------------------------- 0.97s
2025-09-08 00:47:24.126706 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.78s
2025-09-08 00:47:24.126721 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2025-09-08 00:47:24.126731 | orchestrator | 2025-09-08 00:47:24 | INFO  | Task 53c3d814-89bd-4835-818f-2d7e987f54be is in state STARTED
2025-09-08 00:47:24.126741 | orchestrator | 2025-09-08 00:47:24 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:24.126751 | orchestrator | 2025-09-08 00:47:24 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:24.126760 | orchestrator | 2025-09-08 00:47:24 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:27.277869 | orchestrator | 2025-09-08 00:47:27 | INFO  | Task fd649cfd-a01e-42c5-aeb3-66d13ad0bfbc is in state STARTED
2025-09-08 00:47:27.359566 | orchestrator | 2025-09-08 00:47:27 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:27.359678 | orchestrator | 2025-09-08 00:47:27 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:27.359693 | orchestrator | 2025-09-08 00:47:27 | INFO  | Task 53c3d814-89bd-4835-818f-2d7e987f54be is in state STARTED
2025-09-08 00:47:27.359704 | orchestrator | 2025-09-08 00:47:27 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:27.359715 | orchestrator | 2025-09-08 00:47:27 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:27.359727 | orchestrator | 2025-09-08 00:47:27 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:30.332799 | orchestrator | 2025-09-08 00:47:30 | INFO  | Task fd649cfd-a01e-42c5-aeb3-66d13ad0bfbc is in state SUCCESS
2025-09-08 00:47:30.332911 | orchestrator | 2025-09-08 00:47:30 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:30.332928 | orchestrator | 2025-09-08 00:47:30 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:30.332940 | orchestrator | 2025-09-08 00:47:30 | INFO  | Task 53c3d814-89bd-4835-818f-2d7e987f54be is in state STARTED
2025-09-08 00:47:30.332951 | orchestrator | 2025-09-08 00:47:30 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:30.332962 | orchestrator | 2025-09-08 00:47:30 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:30.332973 | orchestrator | 2025-09-08 00:47:30 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:33.413040 | orchestrator | 2025-09-08 00:47:33 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:33.415364 | orchestrator | 2025-09-08 00:47:33 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:33.419189 | orchestrator | 2025-09-08 00:47:33 | INFO  | Task 53c3d814-89bd-4835-818f-2d7e987f54be is in state STARTED
2025-09-08 00:47:33.419909 | orchestrator | 2025-09-08 00:47:33 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:33.423207 | orchestrator | 2025-09-08 00:47:33 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:33.423224 | orchestrator | 2025-09-08 00:47:33 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:36.457805 | orchestrator | 2025-09-08 00:47:36 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:36.458757 | orchestrator | 2025-09-08 00:47:36 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:36.459436 | orchestrator | 2025-09-08 00:47:36 | INFO  | Task 53c3d814-89bd-4835-818f-2d7e987f54be is in state SUCCESS
2025-09-08 00:47:36.460324 | orchestrator | 2025-09-08 00:47:36 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:36.462473 | orchestrator | 2025-09-08 00:47:36 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:36.462498 | orchestrator | 2025-09-08 00:47:36 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:39.492415 | orchestrator | 2025-09-08 00:47:39 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:39.496100 | orchestrator | 2025-09-08 00:47:39 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:39.496423 | orchestrator | 2025-09-08 00:47:39 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:39.498207 | orchestrator | 2025-09-08 00:47:39 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:39.498255 | orchestrator | 2025-09-08 00:47:39 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:42.536900 | orchestrator | 2025-09-08 00:47:42 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:42.538999 | orchestrator | 2025-09-08 00:47:42 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:42.539995 | orchestrator | 2025-09-08 00:47:42 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:42.543966 | orchestrator | 2025-09-08 00:47:42 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:42.543991 | orchestrator | 2025-09-08 00:47:42 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:45.589205 | orchestrator | 2025-09-08 00:47:45 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:45.590336 | orchestrator | 2025-09-08 00:47:45 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:45.591391 | orchestrator | 2025-09-08 00:47:45 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:45.593390 | orchestrator | 2025-09-08 00:47:45 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:45.593500 | orchestrator | 2025-09-08 00:47:45 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:48.639027 | orchestrator | 2025-09-08 00:47:48 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:48.640707 | orchestrator | 2025-09-08 00:47:48 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:48.644567 | orchestrator | 2025-09-08 00:47:48 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:48.647006 | orchestrator | 2025-09-08 00:47:48 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:48.647033 | orchestrator | 2025-09-08 00:47:48 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:51.681369 | orchestrator | 2025-09-08 00:47:51 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:51.683583 | orchestrator | 2025-09-08 00:47:51 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:51.685986 | orchestrator | 2025-09-08 00:47:51 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:51.687158 | orchestrator | 2025-09-08 00:47:51 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:51.687368 | orchestrator | 2025-09-08 00:47:51 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:47:54.785542 | orchestrator | 2025-09-08 00:47:54 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:47:54.787571 | orchestrator | 2025-09-08 00:47:54 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:47:54.790263 | orchestrator | 2025-09-08 00:47:54 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:47:54.793184 | orchestrator | 2025-09-08 00:47:54 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:47:54.793486 | orchestrator | 2025-09-08 00:47:54 | INFO  | Wait 1 second(s) until the next check
2025-09-08
00:47:57.837286 | orchestrator | 2025-09-08 00:47:57 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:47:57.840001 | orchestrator | 2025-09-08 00:47:57 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:47:57.842584 | orchestrator | 2025-09-08 00:47:57 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:47:57.844724 | orchestrator | 2025-09-08 00:47:57 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:47:57.844770 | orchestrator | 2025-09-08 00:47:57 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:00.881651 | orchestrator | 2025-09-08 00:48:00 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:00.882492 | orchestrator | 2025-09-08 00:48:00 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:00.884988 | orchestrator | 2025-09-08 00:48:00 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:00.886484 | orchestrator | 2025-09-08 00:48:00 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:00.886844 | orchestrator | 2025-09-08 00:48:00 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:03.933793 | orchestrator | 2025-09-08 00:48:03 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:03.933940 | orchestrator | 2025-09-08 00:48:03 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:03.936282 | orchestrator | 2025-09-08 00:48:03 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:03.936719 | orchestrator | 2025-09-08 00:48:03 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:03.936751 | orchestrator | 2025-09-08 00:48:03 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:06.970797 | orchestrator 
| 2025-09-08 00:48:06 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:06.971451 | orchestrator | 2025-09-08 00:48:06 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:06.973929 | orchestrator | 2025-09-08 00:48:06 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:06.975121 | orchestrator | 2025-09-08 00:48:06 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:06.975360 | orchestrator | 2025-09-08 00:48:06 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:10.045837 | orchestrator | 2025-09-08 00:48:10 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:10.047891 | orchestrator | 2025-09-08 00:48:10 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:10.048995 | orchestrator | 2025-09-08 00:48:10 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:10.052343 | orchestrator | 2025-09-08 00:48:10 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:10.052368 | orchestrator | 2025-09-08 00:48:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:13.094795 | orchestrator | 2025-09-08 00:48:13 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:13.094906 | orchestrator | 2025-09-08 00:48:13 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:13.095925 | orchestrator | 2025-09-08 00:48:13 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:13.096373 | orchestrator | 2025-09-08 00:48:13 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:13.097040 | orchestrator | 2025-09-08 00:48:13 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:16.138729 | orchestrator | 2025-09-08 00:48:16 | INFO  | 
Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:16.139308 | orchestrator | 2025-09-08 00:48:16 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:16.140295 | orchestrator | 2025-09-08 00:48:16 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:16.141033 | orchestrator | 2025-09-08 00:48:16 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:16.141055 | orchestrator | 2025-09-08 00:48:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:19.194994 | orchestrator | 2025-09-08 00:48:19 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:19.196359 | orchestrator | 2025-09-08 00:48:19 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:19.199365 | orchestrator | 2025-09-08 00:48:19 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:19.200764 | orchestrator | 2025-09-08 00:48:19 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:19.201005 | orchestrator | 2025-09-08 00:48:19 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:22.246858 | orchestrator | 2025-09-08 00:48:22 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:22.247086 | orchestrator | 2025-09-08 00:48:22 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:22.248106 | orchestrator | 2025-09-08 00:48:22 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:22.248988 | orchestrator | 2025-09-08 00:48:22 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:22.249013 | orchestrator | 2025-09-08 00:48:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:25.300855 | orchestrator | 2025-09-08 00:48:25 | INFO  | Task 
ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:25.302083 | orchestrator | 2025-09-08 00:48:25 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:25.304012 | orchestrator | 2025-09-08 00:48:25 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:25.306652 | orchestrator | 2025-09-08 00:48:25 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:25.307226 | orchestrator | 2025-09-08 00:48:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:28.357880 | orchestrator | 2025-09-08 00:48:28 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:28.360459 | orchestrator | 2025-09-08 00:48:28 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:28.363993 | orchestrator | 2025-09-08 00:48:28 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:28.366683 | orchestrator | 2025-09-08 00:48:28 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:28.367036 | orchestrator | 2025-09-08 00:48:28 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:31.404397 | orchestrator | 2025-09-08 00:48:31 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:31.404892 | orchestrator | 2025-09-08 00:48:31 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:31.405846 | orchestrator | 2025-09-08 00:48:31 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:31.406826 | orchestrator | 2025-09-08 00:48:31 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:31.406881 | orchestrator | 2025-09-08 00:48:31 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:34.452105 | orchestrator | 2025-09-08 00:48:34 | INFO  | Task 
ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:34.455865 | orchestrator | 2025-09-08 00:48:34 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:34.460405 | orchestrator | 2025-09-08 00:48:34 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:34.461096 | orchestrator | 2025-09-08 00:48:34 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:34.461162 | orchestrator | 2025-09-08 00:48:34 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:37.503133 | orchestrator | 2025-09-08 00:48:37 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:37.503774 | orchestrator | 2025-09-08 00:48:37 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:37.506866 | orchestrator | 2025-09-08 00:48:37 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:37.507784 | orchestrator | 2025-09-08 00:48:37 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:37.508659 | orchestrator | 2025-09-08 00:48:37 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:40.539796 | orchestrator | 2025-09-08 00:48:40 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:40.540491 | orchestrator | 2025-09-08 00:48:40 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:40.542322 | orchestrator | 2025-09-08 00:48:40 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:40.543446 | orchestrator | 2025-09-08 00:48:40 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:40.543457 | orchestrator | 2025-09-08 00:48:40 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:43.586135 | orchestrator | 2025-09-08 00:48:43 | INFO  | Task 
ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:43.587088 | orchestrator | 2025-09-08 00:48:43 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:43.587901 | orchestrator | 2025-09-08 00:48:43 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:43.588994 | orchestrator | 2025-09-08 00:48:43 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:43.589026 | orchestrator | 2025-09-08 00:48:43 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:46.625000 | orchestrator | 2025-09-08 00:48:46 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:46.626338 | orchestrator | 2025-09-08 00:48:46 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:46.627748 | orchestrator | 2025-09-08 00:48:46 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:46.628920 | orchestrator | 2025-09-08 00:48:46 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:46.629193 | orchestrator | 2025-09-08 00:48:46 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:49.659545 | orchestrator | 2025-09-08 00:48:49 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:49.661026 | orchestrator | 2025-09-08 00:48:49 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:49.661800 | orchestrator | 2025-09-08 00:48:49 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:49.663405 | orchestrator | 2025-09-08 00:48:49 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:49.663434 | orchestrator | 2025-09-08 00:48:49 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:52.698525 | orchestrator | 2025-09-08 00:48:52 | INFO  | Task 
ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:52.699301 | orchestrator | 2025-09-08 00:48:52 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:52.700496 | orchestrator | 2025-09-08 00:48:52 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:52.701989 | orchestrator | 2025-09-08 00:48:52 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:52.702807 | orchestrator | 2025-09-08 00:48:52 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:55.737369 | orchestrator | 2025-09-08 00:48:55 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:55.737662 | orchestrator | 2025-09-08 00:48:55 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:55.738840 | orchestrator | 2025-09-08 00:48:55 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:55.740131 | orchestrator | 2025-09-08 00:48:55 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:55.740154 | orchestrator | 2025-09-08 00:48:55 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:48:58.784063 | orchestrator | 2025-09-08 00:48:58 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED 2025-09-08 00:48:58.786148 | orchestrator | 2025-09-08 00:48:58 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:48:58.787187 | orchestrator | 2025-09-08 00:48:58 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:48:58.788560 | orchestrator | 2025-09-08 00:48:58 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:48:58.788590 | orchestrator | 2025-09-08 00:48:58 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:01.840495 | orchestrator | 2025-09-08 00:49:01 | INFO  | Task 
ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state STARTED
2025-09-08 00:49:01.841592 | orchestrator | 2025-09-08 00:49:01 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:01.843189 | orchestrator | 2025-09-08 00:49:01 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:01.843473 | orchestrator | 2025-09-08 00:49:01 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:01.843768 | orchestrator | 2025-09-08 00:49:01 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:04.896302 | orchestrator | 2025-09-08 00:49:04 | INFO  | Task ee34e5de-5285-4d1f-b2b2-cfb97206918c is in state SUCCESS
2025-09-08 00:49:04.896419 | orchestrator | 2025-09-08 00:49:04 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:04.897697 | orchestrator |
2025-09-08 00:49:04.897735 | orchestrator |
2025-09-08 00:49:04.897748 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-09-08 00:49:04.897760 | orchestrator |
2025-09-08 00:49:04.897771 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-08 00:49:04.897782 | orchestrator | Monday 08 September 2025 00:47:26 +0000 (0:00:00.183) 0:00:00.183 ******
2025-09-08 00:49:04.897794 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-08 00:49:04.897827 | orchestrator |
2025-09-08 00:49:04.897839 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-08 00:49:04.897850 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:00.786) 0:00:00.970 ******
2025-09-08 00:49:04.897861 | orchestrator | changed: [testbed-manager]
2025-09-08 00:49:04.897872 | orchestrator |
2025-09-08 00:49:04.897883 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-09-08 00:49:04.897901 | orchestrator | Monday 08 September 2025 00:47:28 +0000 (0:00:01.134) 0:00:02.105 ******
2025-09-08 00:49:04.897912 | orchestrator | changed: [testbed-manager]
2025-09-08 00:49:04.897922 | orchestrator |
2025-09-08 00:49:04.897933 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:49:04.897944 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:49:04.897956 | orchestrator |
2025-09-08 00:49:04.897966 | orchestrator |
2025-09-08 00:49:04.897977 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:49:04.897987 | orchestrator | Monday 08 September 2025 00:47:28 +0000 (0:00:00.428) 0:00:02.533 ******
2025-09-08 00:49:04.897997 | orchestrator | ===============================================================================
2025-09-08 00:49:04.898007 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.13s
2025-09-08 00:49:04.898129 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s
2025-09-08 00:49:04.898141 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.43s
2025-09-08 00:49:04.898151 | orchestrator |
2025-09-08 00:49:04.898162 | orchestrator |
2025-09-08 00:49:04.898173 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-08 00:49:04.898184 | orchestrator |
2025-09-08 00:49:04.898195 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-08 00:49:04.898206 | orchestrator | Monday 08 September 2025 00:47:26 +0000 (0:00:00.208) 0:00:00.208 ******
2025-09-08 00:49:04.898217 | orchestrator | ok: [testbed-manager]
2025-09-08 00:49:04.898229 | orchestrator |
2025-09-08 00:49:04.898240 | orchestrator | TASK [Create .kube directory]
**************************************************
2025-09-08 00:49:04.898252 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:00.571) 0:00:00.779 ******
2025-09-08 00:49:04.898265 | orchestrator | ok: [testbed-manager]
2025-09-08 00:49:04.898277 | orchestrator |
2025-09-08 00:49:04.898290 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-08 00:49:04.898303 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:00.523) 0:00:01.303 ******
2025-09-08 00:49:04.898316 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-08 00:49:04.898329 | orchestrator |
2025-09-08 00:49:04.898342 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-08 00:49:04.898353 | orchestrator | Monday 08 September 2025 00:47:28 +0000 (0:00:00.662) 0:00:01.966 ******
2025-09-08 00:49:04.898366 | orchestrator | changed: [testbed-manager]
2025-09-08 00:49:04.898378 | orchestrator |
2025-09-08 00:49:04.898391 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-08 00:49:04.898403 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:01.215) 0:00:03.182 ******
2025-09-08 00:49:04.898416 | orchestrator | changed: [testbed-manager]
2025-09-08 00:49:04.898428 | orchestrator |
2025-09-08 00:49:04.898441 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-08 00:49:04.898453 | orchestrator | Monday 08 September 2025 00:47:30 +0000 (0:00:00.819) 0:00:04.001 ******
2025-09-08 00:49:04.898466 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-08 00:49:04.898478 | orchestrator |
2025-09-08 00:49:04.898491 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-08 00:49:04.898503 | orchestrator | Monday 08 September 2025 00:47:32 +0000 (0:00:01.574) 0:00:05.575 ******
2025-09-08 00:49:04.898516 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-08 00:49:04.898535 | orchestrator |
2025-09-08 00:49:04.898548 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-08 00:49:04.898561 | orchestrator | Monday 08 September 2025 00:47:32 +0000 (0:00:00.733) 0:00:06.308 ******
2025-09-08 00:49:04.898573 | orchestrator | ok: [testbed-manager]
2025-09-08 00:49:04.898586 | orchestrator |
2025-09-08 00:49:04.898629 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-08 00:49:04.898639 | orchestrator | Monday 08 September 2025 00:47:33 +0000 (0:00:00.381) 0:00:06.689 ******
2025-09-08 00:49:04.898649 | orchestrator | ok: [testbed-manager]
2025-09-08 00:49:04.898658 | orchestrator |
2025-09-08 00:49:04.898668 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:49:04.898678 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:49:04.898688 | orchestrator |
2025-09-08 00:49:04.898698 | orchestrator |
2025-09-08 00:49:04.898707 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:49:04.898717 | orchestrator | Monday 08 September 2025 00:47:33 +0000 (0:00:00.278) 0:00:06.967 ******
2025-09-08 00:49:04.898726 | orchestrator | ===============================================================================
2025-09-08 00:49:04.898736 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.57s
2025-09-08 00:49:04.898746 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.22s
2025-09-08 00:49:04.898755 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.82s
2025-09-08 00:49:04.898779 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.73s
2025-09-08 00:49:04.898789 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.66s
2025-09-08 00:49:04.898799 | orchestrator | Get home directory of operator user ------------------------------------- 0.57s
2025-09-08 00:49:04.898808 | orchestrator | Create .kube directory -------------------------------------------------- 0.52s
2025-09-08 00:49:04.898818 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s
2025-09-08 00:49:04.898828 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s
2025-09-08 00:49:04.898837 | orchestrator |
2025-09-08 00:49:04.898847 | orchestrator |
2025-09-08 00:49:04.898856 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-09-08 00:49:04.898866 | orchestrator |
2025-09-08 00:49:04.898881 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-08 00:49:04.898891 | orchestrator | Monday 08 September 2025 00:46:34 +0000 (0:00:00.164) 0:00:00.164 ******
2025-09-08 00:49:04.898901 | orchestrator | ok: [localhost] => {
2025-09-08 00:49:04.898911 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-09-08 00:49:04.898922 | orchestrator | }
2025-09-08 00:49:04.898932 | orchestrator |
2025-09-08 00:49:04.898942 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-09-08 00:49:04.898951 | orchestrator | Monday 08 September 2025 00:46:34 +0000 (0:00:00.099) 0:00:00.264 ******
2025-09-08 00:49:04.898962 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-09-08 00:49:04.898974 | orchestrator | ...ignoring
2025-09-08 00:49:04.898984 | orchestrator |
2025-09-08 00:49:04.898994 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-09-08 00:49:04.899003 | orchestrator | Monday 08 September 2025 00:46:37 +0000 (0:00:03.078) 0:00:03.342 ******
2025-09-08 00:49:04.899013 | orchestrator | skipping: [localhost]
2025-09-08 00:49:04.899022 | orchestrator |
2025-09-08 00:49:04.899032 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-09-08 00:49:04.899042 | orchestrator | Monday 08 September 2025 00:46:37 +0000 (0:00:00.073) 0:00:03.415 ******
2025-09-08 00:49:04.899059 | orchestrator | ok: [localhost]
2025-09-08 00:49:04.899068 | orchestrator |
2025-09-08 00:49:04.899078 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:49:04.899088 | orchestrator |
2025-09-08 00:49:04.899097 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:49:04.899107 | orchestrator | Monday 08 September 2025 00:46:37 +0000 (0:00:00.215) 0:00:03.631 ******
2025-09-08 00:49:04.899117 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:04.899126 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:04.899136 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:04.899145 | orchestrator |
2025-09-08 00:49:04.899155 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:49:04.899164 | orchestrator | Monday 08 September 2025 00:46:38 +0000 (0:00:00.347) 0:00:03.978 ******
2025-09-08 00:49:04.899174 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-09-08 00:49:04.899185 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-09-08 00:49:04.899194 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-09-08 00:49:04.899204 | orchestrator |
2025-09-08 00:49:04.899213 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-09-08 00:49:04.899223 | orchestrator |
2025-09-08 00:49:04.899233 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-08 00:49:04.899242 | orchestrator | Monday 08 September 2025 00:46:38 +0000 (0:00:00.540) 0:00:04.519 ******
2025-09-08 00:49:04.899252 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:49:04.899262 | orchestrator |
2025-09-08 00:49:04.899272 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-08 00:49:04.899281 | orchestrator | Monday 08 September 2025 00:46:39 +0000 (0:00:00.598) 0:00:05.117 ******
2025-09-08 00:49:04.899291 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:04.899300 | orchestrator |
2025-09-08 00:49:04.899310 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-09-08 00:49:04.899320 | orchestrator | Monday 08 September 2025 00:46:40 +0000 (0:00:01.406) 0:00:06.524 ******
2025-09-08 00:49:04.899329 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:49:04.899339 | orchestrator |
2025-09-08 00:49:04.899349 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-09-08 00:49:04.899358 | orchestrator | Monday 08 September 2025 00:46:41 +0000 (0:00:00.544) 0:00:06.938 ******
2025-09-08 00:49:04.899368 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:49:04.899377 | orchestrator |
2025-09-08 00:49:04.899387 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-09-08 00:49:04.899396 | orchestrator | Monday 08 September 2025 00:46:41 +0000 (0:00:00.402) 0:00:07.482 ******
2025-09-08 00:49:04.899406 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:49:04.899415 | orchestrator |
2025-09-08 00:49:04.899425 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-09-08 00:49:04.899435 | orchestrator | Monday 08 September 2025 00:46:42 +0000 (0:00:01.029) 0:00:07.885 ******
2025-09-08 00:49:04.899444 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:49:04.899454 | orchestrator |
2025-09-08 00:49:04.899463 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-08 00:49:04.899473 | orchestrator | Monday 08 September 2025 00:46:43 +0000 (0:00:01.029) 0:00:08.914 ******
2025-09-08 00:49:04.899483 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:49:04.899493 | orchestrator |
2025-09-08 00:49:04.899502 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-08 00:49:04.899517 | orchestrator | Monday 08 September 2025 00:46:46 +0000 (0:00:03.835) 0:00:12.750 ******
2025-09-08 00:49:04.899527 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:04.899537 | orchestrator |
2025-09-08 00:49:04.899547 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-09-08 00:49:04.899562 | orchestrator | Monday 08 September 2025 00:46:47 +0000 (0:00:00.990) 0:00:13.740 ******
2025-09-08 00:49:04.899572 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:49:04.899582 | orchestrator |
2025-09-08 00:49:04.899591 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-09-08 00:49:04.899630 | orchestrator | Monday 08 September 2025 00:46:48 +0000 (0:00:00.795) 0:00:14.209 ******
2025-09-08 00:49:04.899640 | orchestrator |
skipping: [testbed-node-0] 2025-09-08 00:49:04.899650 | orchestrator | 2025-09-08 00:49:04.899659 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-08 00:49:04.899673 | orchestrator | Monday 08 September 2025 00:46:49 +0000 (0:00:00.795) 0:00:15.005 ****** 2025-09-08 00:49:04.899689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:04.899704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:04.899716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:04.899727 | orchestrator | 2025-09-08 00:49:04.899743 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-08 00:49:04.899753 | orchestrator | Monday 08 September 2025 00:46:50 +0000 (0:00:01.246) 0:00:16.251 ****** 2025-09-08 00:49:04.899775 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:04.899787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:04.899799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:04.899809 | orchestrator | 2025-09-08 00:49:04.899819 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-08 00:49:04.899829 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:02.364) 0:00:18.615 ****** 2025-09-08 00:49:04.899839 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-08 00:49:04.899849 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-08 00:49:04.899858 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-08 00:49:04.899873 | 
orchestrator | 2025-09-08 00:49:04.899883 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-08 00:49:04.899893 | orchestrator | Monday 08 September 2025 00:46:54 +0000 (0:00:01.780) 0:00:20.396 ****** 2025-09-08 00:49:04.899902 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-08 00:49:04.899912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-08 00:49:04.899922 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-08 00:49:04.899931 | orchestrator | 2025-09-08 00:49:04.899946 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-08 00:49:04.899956 | orchestrator | Monday 08 September 2025 00:46:57 +0000 (0:00:02.976) 0:00:23.372 ****** 2025-09-08 00:49:04.899966 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-08 00:49:04.899975 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-08 00:49:04.899985 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-08 00:49:04.899995 | orchestrator | 2025-09-08 00:49:04.900004 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-08 00:49:04.900014 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:01.877) 0:00:25.249 ****** 2025-09-08 00:49:04.900027 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-08 00:49:04.900037 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-08 00:49:04.900047 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-08 00:49:04.900056 | orchestrator | 2025-09-08 00:49:04.900066 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-08 00:49:04.900076 | orchestrator | Monday 08 September 2025 00:47:01 +0000 (0:00:01.756) 0:00:27.006 ****** 2025-09-08 00:49:04.900085 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-08 00:49:04.900095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-08 00:49:04.900105 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-08 00:49:04.900114 | orchestrator | 2025-09-08 00:49:04.900124 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-08 00:49:04.900134 | orchestrator | Monday 08 September 2025 00:47:02 +0000 (0:00:01.702) 0:00:28.709 ****** 2025-09-08 00:49:04.900143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-08 00:49:04.900153 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-08 00:49:04.900163 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-08 00:49:04.900172 | orchestrator | 2025-09-08 00:49:04.900182 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-08 00:49:04.900192 | orchestrator | Monday 08 September 2025 00:47:04 +0000 (0:00:01.951) 0:00:30.661 ****** 2025-09-08 00:49:04.900202 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:04.900211 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:04.900221 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:04.900231 | orchestrator | 2025-09-08 
00:49:04.900240 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-08 00:49:04.900250 | orchestrator | Monday 08 September 2025 00:47:05 +0000 (0:00:00.901) 0:00:31.562 ****** 2025-09-08 00:49:04.900261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:04.900286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:04.900302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-08 00:49:04.900314 | orchestrator | 2025-09-08 00:49:04.900323 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-08 00:49:04.900333 | orchestrator | Monday 08 September 2025 00:47:07 +0000 (0:00:02.174) 0:00:33.737 ****** 2025-09-08 00:49:04.900342 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:49:04.900352 | orchestrator | changed: [testbed-node-1] 
2025-09-08 00:49:04.900362 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:04.900371 | orchestrator |
2025-09-08 00:49:04.900381 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-09-08 00:49:04.900390 | orchestrator | Monday 08 September 2025 00:47:10 +0000 (0:00:02.376) 0:00:36.113 ******
2025-09-08 00:49:04.900400 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:49:04.900409 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:49:04.900419 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:04.900429 | orchestrator |
2025-09-08 00:49:04.900438 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-09-08 00:49:04.900455 | orchestrator | Monday 08 September 2025 00:47:18 +0000 (0:00:08.473) 0:00:44.586 ******
2025-09-08 00:49:04.900465 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:49:04.900475 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:49:04.900484 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:04.900494 | orchestrator |
2025-09-08 00:49:04.900503 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-08 00:49:04.900513 | orchestrator |
2025-09-08 00:49:04.900522 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-08 00:49:04.900532 | orchestrator | Monday 08 September 2025 00:47:19 +0000 (0:00:00.593) 0:00:45.180 ******
2025-09-08 00:49:04.900542 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:04.900551 | orchestrator |
2025-09-08 00:49:04.900561 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-08 00:49:04.900570 | orchestrator | Monday 08 September 2025 00:47:20 +0000 (0:00:00.706) 0:00:45.887 ******
2025-09-08 00:49:04.900580 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:49:04.900589 | orchestrator |
2025-09-08 00:49:04.900648 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-08 00:49:04.900658 | orchestrator | Monday 08 September 2025 00:47:20 +0000 (0:00:00.247) 0:00:46.134 ******
2025-09-08 00:49:04.900667 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:49:04.900677 | orchestrator |
2025-09-08 00:49:04.900687 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-08 00:49:04.900696 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:06.788) 0:00:52.922 ******
2025-09-08 00:49:04.900706 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:49:04.900716 | orchestrator |
2025-09-08 00:49:04.900725 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-08 00:49:04.900735 | orchestrator |
2025-09-08 00:49:04.900744 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-08 00:49:04.900754 | orchestrator | Monday 08 September 2025 00:48:18 +0000 (0:00:51.368) 0:01:44.291 ******
2025-09-08 00:49:04.900764 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:04.900773 | orchestrator |
2025-09-08 00:49:04.900783 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-08 00:49:04.900793 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:00.745) 0:01:45.036 ******
2025-09-08 00:49:04.900802 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:49:04.900812 | orchestrator |
2025-09-08 00:49:04.900822 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-08 00:49:04.900831 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:00.528) 0:01:45.565 ******
2025-09-08 00:49:04.900841 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:49:04.900850 | orchestrator |
2025-09-08 00:49:04.900860 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-08 00:49:04.900870 | orchestrator | Monday 08 September 2025 00:48:21 +0000 (0:00:01.747) 0:01:47.313 ******
2025-09-08 00:49:04.900879 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:49:04.900889 | orchestrator |
2025-09-08 00:49:04.900898 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-08 00:49:04.900908 | orchestrator |
2025-09-08 00:49:04.900918 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-08 00:49:04.900933 | orchestrator | Monday 08 September 2025 00:48:37 +0000 (0:00:16.245) 0:02:03.559 ******
2025-09-08 00:49:04.900943 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:04.900953 | orchestrator |
2025-09-08 00:49:04.900963 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-08 00:49:04.900973 | orchestrator | Monday 08 September 2025 00:48:38 +0000 (0:00:00.686) 0:02:04.245 ******
2025-09-08 00:49:04.900982 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:49:04.900992 | orchestrator |
2025-09-08 00:49:04.901002 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-08 00:49:04.901023 | orchestrator | Monday 08 September 2025 00:48:38 +0000 (0:00:00.254) 0:02:04.499 ******
2025-09-08 00:49:04.901032 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:04.901042 | orchestrator |
2025-09-08 00:49:04.901097 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-08 00:49:04.901115 | orchestrator | Monday 08 September 2025 00:48:45 +0000 (0:00:06.677) 0:02:11.177 ******
2025-09-08 00:49:04.901125 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:04.901135 | orchestrator |
2025-09-08 00:49:04.901144 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-09-08 00:49:04.901154 | orchestrator |
2025-09-08 00:49:04.901164 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-09-08 00:49:04.901174 | orchestrator | Monday 08 September 2025 00:48:58 +0000 (0:00:13.388) 0:02:24.566 ******
2025-09-08 00:49:04.901183 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:49:04.901193 | orchestrator |
2025-09-08 00:49:04.901203 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-09-08 00:49:04.901212 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:00.785) 0:02:25.351 ******
2025-09-08 00:49:04.901222 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-08 00:49:04.901231 | orchestrator | enable_outward_rabbitmq_True
2025-09-08 00:49:04.901241 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-08 00:49:04.901250 | orchestrator | outward_rabbitmq_restart
2025-09-08 00:49:04.901260 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:04.901270 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:04.901279 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:04.901289 | orchestrator |
2025-09-08 00:49:04.901299 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-09-08 00:49:04.901308 | orchestrator | skipping: no hosts matched
2025-09-08 00:49:04.901318 | orchestrator |
2025-09-08 00:49:04.901327 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-09-08 00:49:04.901337 | orchestrator | skipping: no hosts matched
2025-09-08 00:49:04.901347 | orchestrator |
2025-09-08 00:49:04.901356 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-09-08 00:49:04.901366 | orchestrator | skipping: no hosts matched
2025-09-08 00:49:04.901375 | orchestrator |
2025-09-08 00:49:04.901385 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:49:04.901395 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-08 00:49:04.901405 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-08 00:49:04.901415 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:49:04.901425 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-08 00:49:04.901435 | orchestrator |
2025-09-08 00:49:04.901444 | orchestrator |
2025-09-08 00:49:04.901454 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:49:04.901463 | orchestrator | Monday 08 September 2025 00:49:02 +0000 (0:00:02.624) 0:02:27.976 ******
2025-09-08 00:49:04.901473 | orchestrator | ===============================================================================
2025-09-08 00:49:04.901483 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.00s
2025-09-08 00:49:04.901492 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.21s
2025-09-08 00:49:04.901502 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.47s
2025-09-08 00:49:04.901512 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 3.84s
2025-09-08 00:49:04.901529 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.08s
2025-09-08 00:49:04.901538 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.98s
2025-09-08 00:49:04.901548 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.62s
2025-09-08 00:49:04.901558 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 2.38s
2025-09-08 00:49:04.901567 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.36s
2025-09-08 00:49:04.901577 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.17s
2025-09-08 00:49:04.901586 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.14s
2025-09-08 00:49:04.901638 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.95s
2025-09-08 00:49:04.901649 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.88s
2025-09-08 00:49:04.901658 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.78s
2025-09-08 00:49:04.901668 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.76s
2025-09-08 00:49:04.901684 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.70s
2025-09-08 00:49:04.901695 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.41s
2025-09-08 00:49:04.901705 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.25s
2025-09-08 00:49:04.901714 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.03s
2025-09-08 00:49:04.901724 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.03s
2025-09-08 00:49:04.901734 | orchestrator | 2025-09-08 00:49:04 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:04.901823 | orchestrator | 2025-09-08 00:49:04 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:04.901838 | orchestrator | 2025-09-08 00:49:04 | INFO  | Wait
1 second(s) until the next check 2025-09-08 00:49:07.950582 | orchestrator | 2025-09-08 00:49:07 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:49:07.952435 | orchestrator | 2025-09-08 00:49:07 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:49:07.955242 | orchestrator | 2025-09-08 00:49:07 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:49:07.955320 | orchestrator | 2025-09-08 00:49:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:10.992949 | orchestrator | 2025-09-08 00:49:10 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:49:10.993781 | orchestrator | 2025-09-08 00:49:10 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:49:10.995931 | orchestrator | 2025-09-08 00:49:10 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:49:10.996024 | orchestrator | 2025-09-08 00:49:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:14.042691 | orchestrator | 2025-09-08 00:49:14 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:49:14.045272 | orchestrator | 2025-09-08 00:49:14 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:49:14.049358 | orchestrator | 2025-09-08 00:49:14 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:49:14.049582 | orchestrator | 2025-09-08 00:49:14 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:49:17.105729 | orchestrator | 2025-09-08 00:49:17 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:49:17.106666 | orchestrator | 2025-09-08 00:49:17 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED 2025-09-08 00:49:17.108423 | orchestrator | 2025-09-08 00:49:17 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state 
STARTED
2025-09-08 00:49:17.108680 | orchestrator | 2025-09-08 00:49:17 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:20.158314 | orchestrator | 2025-09-08 00:49:20 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:20.159654 | orchestrator | 2025-09-08 00:49:20 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:20.162233 | orchestrator | 2025-09-08 00:49:20 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:20.162256 | orchestrator | 2025-09-08 00:49:20 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:23.202538 | orchestrator | 2025-09-08 00:49:23 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:23.202718 | orchestrator | 2025-09-08 00:49:23 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:23.202734 | orchestrator | 2025-09-08 00:49:23 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:23.202747 | orchestrator | 2025-09-08 00:49:23 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:26.252220 | orchestrator | 2025-09-08 00:49:26 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:26.254260 | orchestrator | 2025-09-08 00:49:26 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:26.257106 | orchestrator | 2025-09-08 00:49:26 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:26.257684 | orchestrator | 2025-09-08 00:49:26 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:29.344721 | orchestrator | 2025-09-08 00:49:29 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:29.346621 | orchestrator | 2025-09-08 00:49:29 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:29.348799 | orchestrator | 2025-09-08 00:49:29 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:29.348841 | orchestrator | 2025-09-08 00:49:29 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:32.391070 | orchestrator | 2025-09-08 00:49:32 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:32.392168 | orchestrator | 2025-09-08 00:49:32 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:32.392198 | orchestrator | 2025-09-08 00:49:32 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:32.392461 | orchestrator | 2025-09-08 00:49:32 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:35.436509 | orchestrator | 2025-09-08 00:49:35 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:35.437858 | orchestrator | 2025-09-08 00:49:35 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:35.439661 | orchestrator | 2025-09-08 00:49:35 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:35.439686 | orchestrator | 2025-09-08 00:49:35 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:38.489151 | orchestrator | 2025-09-08 00:49:38 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:38.491145 | orchestrator | 2025-09-08 00:49:38 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:38.492755 | orchestrator | 2025-09-08 00:49:38 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:38.492878 | orchestrator | 2025-09-08 00:49:38 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:41.532063 | orchestrator | 2025-09-08 00:49:41 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:41.532176 | orchestrator | 2025-09-08 00:49:41 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:41.532919 | orchestrator | 2025-09-08 00:49:41 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:41.532943 | orchestrator | 2025-09-08 00:49:41 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:44.573210 | orchestrator | 2025-09-08 00:49:44 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:44.575643 | orchestrator | 2025-09-08 00:49:44 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:44.578103 | orchestrator | 2025-09-08 00:49:44 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:44.578430 | orchestrator | 2025-09-08 00:49:44 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:47.623671 | orchestrator | 2025-09-08 00:49:47 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:47.626694 | orchestrator | 2025-09-08 00:49:47 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:47.631024 | orchestrator | 2025-09-08 00:49:47 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:47.631431 | orchestrator | 2025-09-08 00:49:47 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:50.684653 | orchestrator | 2025-09-08 00:49:50 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:50.687084 | orchestrator | 2025-09-08 00:49:50 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:50.690162 | orchestrator | 2025-09-08 00:49:50 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:50.690492 | orchestrator | 2025-09-08 00:49:50 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:53.731393 | orchestrator | 2025-09-08 00:49:53 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:53.732377 | orchestrator | 2025-09-08 00:49:53 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:53.734454 | orchestrator | 2025-09-08 00:49:53 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:53.734480 | orchestrator | 2025-09-08 00:49:53 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:56.774625 | orchestrator | 2025-09-08 00:49:56 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:56.776115 | orchestrator | 2025-09-08 00:49:56 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state STARTED
2025-09-08 00:49:56.777614 | orchestrator | 2025-09-08 00:49:56 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:56.777827 | orchestrator | 2025-09-08 00:49:56 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:49:59.824301 | orchestrator | 2025-09-08 00:49:59 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:49:59.825555 | orchestrator | 2025-09-08 00:49:59 | INFO  | Task 52e8067d-164c-47d1-b92c-7498e7e293e1 is in state SUCCESS
2025-09-08 00:49:59.827729 | orchestrator |
2025-09-08 00:49:59.827796 | orchestrator |
2025-09-08 00:49:59.827810 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 00:49:59.827847 | orchestrator |
2025-09-08 00:49:59.827924 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 00:49:59.827939 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:00.237) 0:00:00.237 ******
2025-09-08 00:49:59.827951 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:49:59.827964 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:49:59.827976 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:49:59.827987 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.827998 | orchestrator | ok:
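The repeated "is in state STARTED ... Wait 1 second(s) until the next check" entries above come from a client polling remote task state until every task has left STARTED. A minimal sketch of that polling pattern (the `get_state` callback is hypothetical; the real job queries Celery-style task states through the OSISM client, not this function):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll get_state(task_id) until no task is in state STARTED.

    get_state is a caller-supplied lookup (hypothetical here); interval
    mirrors the "Wait 1 second(s) until the next check" cadence above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        # Done once every task has reached a terminal state such as SUCCESS.
        if all(s != "STARTED" for s in states.values()):
            return states
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks still STARTED after timeout")
```

In the log above, task 52e8067d-164c-47d1-b92c-7498e7e293e1 is the first to report SUCCESS, after which the collected Ansible play output is flushed to the console.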
[testbed-node-1]
2025-09-08 00:49:59.828009 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.828020 | orchestrator |
2025-09-08 00:49:59.828030 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 00:49:59.828042 | orchestrator | Monday 08 September 2025 00:47:28 +0000 (0:00:00.745) 0:00:00.982 ******
2025-09-08 00:49:59.828053 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-09-08 00:49:59.828064 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-09-08 00:49:59.828075 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-09-08 00:49:59.828086 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-09-08 00:49:59.828097 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-09-08 00:49:59.828107 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-09-08 00:49:59.828118 | orchestrator |
2025-09-08 00:49:59.828129 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-09-08 00:49:59.828140 | orchestrator |
2025-09-08 00:49:59.828151 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-09-08 00:49:59.828161 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:01.219) 0:00:02.202 ******
2025-09-08 00:49:59.828233 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:49:59.828248 | orchestrator |
2025-09-08 00:49:59.828259 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-09-08 00:49:59.828270 | orchestrator | Monday 08 September 2025 00:47:31 +0000 (0:00:01.699) 0:00:03.901 ******
2025-09-08 00:49:59.828283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828297 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828387 | orchestrator |
2025-09-08 00:49:59.828406 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-09-08 00:49:59.828419 | orchestrator | Monday 08 September 2025 00:47:33 +0000 (0:00:02.149) 0:00:06.051 ******
2025-09-08 00:49:59.828432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828534 | orchestrator |
2025-09-08 00:49:59.828553 | orchestrator |
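The per-item output above shows the kolla-ansible convention of describing each service as an entry in a services dict (container name, group, enabled flag, image, volumes) and looping over that dict in every task. A hedged, illustrative Python reimplementation of that selection step (the data is copied from the log; the helper itself is not kolla-ansible code):

```python
# Data shape copied from the loop items in the log above.
ovn_controller_services = {
    "ovn-controller": {
        "container_name": "ovn_controller",
        "group": "ovn-controller",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711",
        "volumes": [
            "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

def enabled_items(services):
    """Mimic Ansible's `services | dict2items` combined with the usual
    `when: item.value.enabled` guard, yielding the key/value items the
    tasks above iterate over."""
    return [
        {"key": name, "value": svc}
        for name, svc in services.items()
        if svc.get("enabled")
    ]
```

Each task in the role (config directories, config.json, systemd override, container check) then runs once per enabled item, which is why the same dict is echoed for every node and every task.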
TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-09-08 00:49:59.828599 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:01.648) 0:00:07.699 ******
2025-09-08 00:49:59.828621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828713 | orchestrator |
2025-09-08 00:49:59.828724 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-09-08 00:49:59.828735 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:01.250) 0:00:08.949 ******
2025-09-08 00:49:59.828747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828837 | orchestrator |
2025-09-08 00:49:59.828848 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-09-08 00:49:59.828859 | orchestrator | Monday 08 September 2025 00:47:38 +0000 (0:00:01.660) 0:00:10.609 ******
2025-09-08 00:49:59.828870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.828945 | orchestrator |
2025-09-08 00:49:59.828956 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-09-08 00:49:59.828966 | orchestrator | Monday 08 September 2025 00:47:39 +0000 (0:00:01.287) 0:00:11.897 ******
2025-09-08 00:49:59.828978 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:49:59.828989 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:49:59.829000 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:49:59.829011 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:49:59.829022 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:49:59.829033 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:59.829044 | orchestrator |
2025-09-08 00:49:59.829054 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-09-08 00:49:59.829065 | orchestrator | Monday 08 September 2025 00:47:42 +0000 (0:00:02.716) 0:00:14.613 ******
2025-09-08 00:49:59.829076 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-09-08 00:49:59.829088 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-09-08 00:49:59.829099 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-09-08 00:49:59.829114 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-09-08 00:49:59.829131 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-09-08 00:49:59.829142 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-09-08 00:49:59.829153 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:49:59.829164 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:49:59.829175 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:49:59.829186 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:49:59.829196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:49:59.829208 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:49:59.829220 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:49:59.829231 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-08 00:49:59.829242 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:49:59.829253 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:49:59.829274 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:49:59.829285 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:49:59.829297 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:49:59.829308 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:49:59.829319 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-08 00:49:59.829330 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:49:59.829340 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:49:59.829351 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:49:59.829362 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:49:59.829373 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:49:59.829383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-08 00:49:59.829394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:49:59.829405 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:49:59.829416 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:49:59.829427 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:49:59.829438 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:49:59.829448 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:49:59.829459 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-08 00:49:59.829470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:49:59.829481 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-08 00:49:59.829492 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-08 00:49:59.829503 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-08 00:49:59.829514 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-08 00:49:59.829524 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-08 00:49:59.829544 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-08 00:49:59.829569 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-09-08 00:49:59.829613 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-09-08 00:49:59.829634 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-09-08 00:49:59.829654 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-09-08 00:49:59.829688 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-09-08 00:49:59.829700 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-08 00:49:59.829711 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-08 00:49:59.829721 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-08 00:49:59.829732 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-08 00:49:59.829743 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-08 00:49:59.829754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-08 00:49:59.829765 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-09-08 00:49:59.829776 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-08 00:49:59.829786 | orchestrator |
2025-09-08 00:49:59.829797 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:49:59.829808 | orchestrator | Monday 08 September 2025 00:48:02 +0000 (0:00:20.496) 0:00:35.110 ******
2025-09-08 00:49:59.829819 | orchestrator |
2025-09-08 00:49:59.829830 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:49:59.829841 | orchestrator | Monday 08 September 2025 00:48:02 +0000 (0:00:00.322) 0:00:35.433 ******
2025-09-08 00:49:59.829852 | orchestrator |
2025-09-08 00:49:59.829863 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:49:59.829873 | orchestrator | Monday 08 September 2025 00:48:03 +0000 (0:00:00.069) 0:00:35.502 ******
2025-09-08 00:49:59.829884 | orchestrator |
2025-09-08 00:49:59.829895 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:49:59.829906 | orchestrator | Monday 08 September 2025 00:48:03 +0000 (0:00:00.085) 0:00:35.588 ******
2025-09-08 00:49:59.829916 | orchestrator |
2025-09-08 00:49:59.829927 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:49:59.829938 | orchestrator | Monday 08 September 2025 00:48:03 +0000 (0:00:00.079) 0:00:35.668 ******
2025-09-08 00:49:59.829948 | orchestrator |
2025-09-08 00:49:59.829959 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-08 00:49:59.829970 | orchestrator | Monday 08 September 2025 00:48:03 +0000 (0:00:00.069) 0:00:35.737 ******
2025-09-08 00:49:59.829981 | orchestrator |
2025-09-08 00:49:59.829992 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-09-08 00:49:59.830002 | orchestrator | Monday 08 September 2025 00:48:03 +0000 (0:00:00.087) 0:00:35.825 ******
2025-09-08 00:49:59.830013 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.830076 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:49:59.830088 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:49:59.830099 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:49:59.830110 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.830121 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.830132 | orchestrator |
2025-09-08 00:49:59.830143 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-08 00:49:59.830154 | orchestrator | Monday 08 September 2025 00:48:05 +0000 (0:00:01.756) 0:00:37.582 ******
2025-09-08 00:49:59.830165 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:49:59.830176 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:49:59.830187 | orchestrator | changed: [testbed-node-5]
2025-09-08 00:49:59.830205 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:59.830216 | orchestrator | changed: [testbed-node-3]
2025-09-08 00:49:59.830227 | orchestrator | changed: [testbed-node-4]
2025-09-08 00:49:59.830238 | orchestrator |
2025-09-08 00:49:59.830249 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-08 00:49:59.830260 | orchestrator |
2025-09-08 00:49:59.830271 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-08 00:49:59.830282 | orchestrator | Monday 08 September 2025 00:48:37 +0000 (0:00:32.097) 0:01:09.680 ******
2025-09-08 00:49:59.830293 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:49:59.830304 | orchestrator |
2025-09-08 00:49:59.830315 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-08 00:49:59.830326 | orchestrator | Monday 08 September 2025 00:48:37 +0000 (0:00:00.683) 0:01:10.364 ******
2025-09-08 00:49:59.830337 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:49:59.830348 | orchestrator |
2025-09-08 00:49:59.830367 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-08 00:49:59.830385 | orchestrator | Monday 08 September 2025 00:48:38 +0000 (0:00:00.547) 0:01:10.911 ******
2025-09-08 00:49:59.830396 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.830407 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.830418 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.830429 | orchestrator |
2025-09-08 00:49:59.830440 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-08 00:49:59.830451 | orchestrator | Monday 08 September 2025 00:48:39 +0000 (0:00:01.021) 0:01:11.933 ******
2025-09-08 00:49:59.830462 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.830473 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.830483 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.830494 | orchestrator |
2025-09-08 00:49:59.830505 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-08 00:49:59.830515 | orchestrator | Monday 08 September 2025 00:48:39 +0000 (0:00:00.357) 0:01:12.291 ******
2025-09-08 00:49:59.830526 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.830537 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.830547 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.830558 | orchestrator |
2025-09-08 00:49:59.830569 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-08 00:49:59.830624 | orchestrator | Monday 08 September 2025 00:48:40 +0000 (0:00:00.332) 0:01:12.623 ******
2025-09-08 00:49:59.830644 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.830664 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.830682 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.830701 | orchestrator |
2025-09-08 00:49:59.830713 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-08 00:49:59.830724 | orchestrator | Monday 08 September 2025 00:48:40 +0000 (0:00:00.327) 0:01:12.951 ******
2025-09-08 00:49:59.830735 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.830746 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.830757 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.830768 | orchestrator |
2025-09-08 00:49:59.830779 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-08 00:49:59.830790 | orchestrator | Monday 08 September 2025 00:48:40 +0000 (0:00:00.547) 0:01:13.499 ******
2025-09-08 00:49:59.830801 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:49:59.830812 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:49:59.830823 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:49:59.830834 | orchestrator |
2025-09-08 00:49:59.830845 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-08 00:49:59.830856 | orchestrator | Monday 08 September 2025 00:48:41 +0000 (0:00:00.313) 0:01:13.812 ******
2025-09-08 00:49:59.830867 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:49:59.830887 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:49:59.830898 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:49:59.830909 | orchestrator | 2025-09-08 00:49:59.830920 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-08 00:49:59.830931 | orchestrator | Monday 08 September 2025 00:48:41 +0000 (0:00:00.303) 0:01:14.116 ****** 2025-09-08 00:49:59.830942 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.830953 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.830964 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.830975 | orchestrator | 2025-09-08 00:49:59.830986 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-08 00:49:59.830997 | orchestrator | Monday 08 September 2025 00:48:41 +0000 (0:00:00.313) 0:01:14.429 ****** 2025-09-08 00:49:59.831008 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831019 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831030 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.831041 | orchestrator | 2025-09-08 00:49:59.831052 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-08 00:49:59.831063 | orchestrator | Monday 08 September 2025 00:48:42 +0000 (0:00:00.693) 0:01:15.123 ****** 2025-09-08 00:49:59.831074 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831085 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831096 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.831107 | orchestrator | 2025-09-08 00:49:59.831118 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-08 00:49:59.831129 | orchestrator | Monday 08 September 2025 00:48:42 +0000 (0:00:00.362) 0:01:15.485 ****** 2025-09-08 00:49:59.831140 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831150 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831161 | orchestrator | skipping: [testbed-node-2] 
2025-09-08 00:49:59.831172 | orchestrator | 2025-09-08 00:49:59.831183 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-08 00:49:59.831194 | orchestrator | Monday 08 September 2025 00:48:43 +0000 (0:00:00.349) 0:01:15.835 ****** 2025-09-08 00:49:59.831205 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831216 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831227 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.831238 | orchestrator | 2025-09-08 00:49:59.831249 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-08 00:49:59.831260 | orchestrator | Monday 08 September 2025 00:48:43 +0000 (0:00:00.315) 0:01:16.151 ****** 2025-09-08 00:49:59.831271 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831282 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831293 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.831304 | orchestrator | 2025-09-08 00:49:59.831315 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-08 00:49:59.831326 | orchestrator | Monday 08 September 2025 00:48:44 +0000 (0:00:00.525) 0:01:16.677 ****** 2025-09-08 00:49:59.831337 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831348 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831359 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.831369 | orchestrator | 2025-09-08 00:49:59.831380 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-08 00:49:59.831391 | orchestrator | Monday 08 September 2025 00:48:44 +0000 (0:00:00.309) 0:01:16.986 ****** 2025-09-08 00:49:59.831403 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831414 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831425 | orchestrator | skipping: [testbed-node-2] 
2025-09-08 00:49:59.831436 | orchestrator | 2025-09-08 00:49:59.831453 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-08 00:49:59.831464 | orchestrator | Monday 08 September 2025 00:48:44 +0000 (0:00:00.283) 0:01:17.270 ****** 2025-09-08 00:49:59.831480 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831492 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831509 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.831520 | orchestrator | 2025-09-08 00:49:59.831531 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-08 00:49:59.831542 | orchestrator | Monday 08 September 2025 00:48:45 +0000 (0:00:00.307) 0:01:17.577 ****** 2025-09-08 00:49:59.831553 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831564 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831597 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.831609 | orchestrator | 2025-09-08 00:49:59.831620 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-08 00:49:59.831631 | orchestrator | Monday 08 September 2025 00:48:45 +0000 (0:00:00.588) 0:01:18.166 ****** 2025-09-08 00:49:59.831647 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:49:59.831666 | orchestrator | 2025-09-08 00:49:59.831685 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-08 00:49:59.831705 | orchestrator | Monday 08 September 2025 00:48:46 +0000 (0:00:00.624) 0:01:18.790 ****** 2025-09-08 00:49:59.831725 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:49:59.831743 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:49:59.831758 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:49:59.831769 | orchestrator | 2025-09-08 00:49:59.831779 | 
orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-08 00:49:59.831790 | orchestrator | Monday 08 September 2025 00:48:46 +0000 (0:00:00.472) 0:01:19.263 ****** 2025-09-08 00:49:59.831801 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:49:59.831812 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:49:59.831822 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:49:59.831833 | orchestrator | 2025-09-08 00:49:59.831844 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-08 00:49:59.831855 | orchestrator | Monday 08 September 2025 00:48:47 +0000 (0:00:00.683) 0:01:19.946 ****** 2025-09-08 00:49:59.831865 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831876 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831887 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.831898 | orchestrator | 2025-09-08 00:49:59.831909 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-08 00:49:59.831919 | orchestrator | Monday 08 September 2025 00:48:47 +0000 (0:00:00.443) 0:01:20.389 ****** 2025-09-08 00:49:59.831930 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.831941 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.831951 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.831962 | orchestrator | 2025-09-08 00:49:59.831973 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-08 00:49:59.831984 | orchestrator | Monday 08 September 2025 00:48:48 +0000 (0:00:00.350) 0:01:20.740 ****** 2025-09-08 00:49:59.831995 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.832005 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.832016 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.832027 | orchestrator | 2025-09-08 00:49:59.832038 | orchestrator 
| TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-08 00:49:59.832048 | orchestrator | Monday 08 September 2025 00:48:48 +0000 (0:00:00.357) 0:01:21.097 ****** 2025-09-08 00:49:59.832059 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.832070 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.832081 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.832092 | orchestrator | 2025-09-08 00:49:59.832102 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-08 00:49:59.832113 | orchestrator | Monday 08 September 2025 00:48:49 +0000 (0:00:00.560) 0:01:21.658 ****** 2025-09-08 00:49:59.832124 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.832135 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.832145 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.832156 | orchestrator | 2025-09-08 00:49:59.832175 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-08 00:49:59.832185 | orchestrator | Monday 08 September 2025 00:48:49 +0000 (0:00:00.310) 0:01:21.968 ****** 2025-09-08 00:49:59.832196 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.832207 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.832218 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.832229 | orchestrator | 2025-09-08 00:49:59.832239 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-08 00:49:59.832250 | orchestrator | Monday 08 September 2025 00:48:49 +0000 (0:00:00.303) 0:01:22.272 ****** 2025-09-08 00:49:59.832262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832390 | orchestrator | 2025-09-08 00:49:59.832401 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-08 00:49:59.832412 | orchestrator | Monday 08 September 2025 00:48:51 +0000 (0:00:01.620) 0:01:23.893 
****** 2025-09-08 00:49:59.832423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-08 00:49:59.832541 | orchestrator | 2025-09-08 00:49:59.832552 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-08 00:49:59.832563 | orchestrator | Monday 08 September 2025 00:48:56 +0000 (0:00:05.509) 0:01:29.402 ****** 2025-09-08 00:49:59.832596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-08 00:49:59.832660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 00:49:59.832755 | orchestrator | 2025-09-08 00:49:59.832773 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-08 00:49:59.832787 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:02.428) 0:01:31.830 ****** 2025-09-08 00:49:59.832798 | orchestrator | 2025-09-08 00:49:59.832809 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-08 00:49:59.832820 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:00.107) 0:01:31.938 ****** 2025-09-08 00:49:59.832830 | orchestrator | 2025-09-08 00:49:59.832841 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-08 00:49:59.832852 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:00.077) 0:01:32.015 ****** 2025-09-08 00:49:59.832863 | orchestrator | 2025-09-08 00:49:59.832874 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-08 00:49:59.832884 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:00.070) 0:01:32.086 ****** 2025-09-08 00:49:59.832895 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:49:59.832906 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:49:59.832917 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:49:59.832928 | orchestrator | 2025-09-08 00:49:59.832939 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-08 00:49:59.832949 | orchestrator | Monday 08 September 2025 00:49:07 +0000 (0:00:07.437) 0:01:39.524 ****** 2025-09-08 00:49:59.832960 | orchestrator | changed: 
[testbed-node-0] 2025-09-08 00:49:59.832971 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:49:59.832982 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:49:59.832993 | orchestrator | 2025-09-08 00:49:59.833004 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-08 00:49:59.833015 | orchestrator | Monday 08 September 2025 00:49:09 +0000 (0:00:02.728) 0:01:42.253 ****** 2025-09-08 00:49:59.833026 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:49:59.833037 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:49:59.833048 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:49:59.833058 | orchestrator | 2025-09-08 00:49:59.833069 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-08 00:49:59.833080 | orchestrator | Monday 08 September 2025 00:49:17 +0000 (0:00:07.831) 0:01:50.085 ****** 2025-09-08 00:49:59.833091 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:49:59.833102 | orchestrator | 2025-09-08 00:49:59.833112 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-08 00:49:59.833123 | orchestrator | Monday 08 September 2025 00:49:17 +0000 (0:00:00.137) 0:01:50.222 ****** 2025-09-08 00:49:59.833134 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:49:59.833145 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:49:59.833156 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:49:59.833167 | orchestrator | 2025-09-08 00:49:59.833185 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-08 00:49:59.833196 | orchestrator | Monday 08 September 2025 00:49:18 +0000 (0:00:00.842) 0:01:51.065 ****** 2025-09-08 00:49:59.833207 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:49:59.833223 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:49:59.833235 | orchestrator | changed: [testbed-node-0] 2025-09-08 
00:49:59.833245 | orchestrator |
2025-09-08 00:49:59.833256 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-08 00:49:59.833274 | orchestrator | Monday 08 September 2025 00:49:19 +0000 (0:00:00.671) 0:01:51.737 ******
2025-09-08 00:49:59.833285 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.833296 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.833306 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.833317 | orchestrator |
2025-09-08 00:49:59.833328 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-08 00:49:59.833339 | orchestrator | Monday 08 September 2025 00:49:20 +0000 (0:00:01.098) 0:01:52.835 ******
2025-09-08 00:49:59.833349 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:49:59.833360 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:49:59.833371 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:49:59.833382 | orchestrator |
2025-09-08 00:49:59.833392 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-08 00:49:59.833403 | orchestrator | Monday 08 September 2025 00:49:21 +0000 (0:00:00.714) 0:01:53.550 ******
2025-09-08 00:49:59.833414 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.833425 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.833436 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.833446 | orchestrator |
2025-09-08 00:49:59.833457 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-08 00:49:59.833468 | orchestrator | Monday 08 September 2025 00:49:21 +0000 (0:00:00.746) 0:01:54.296 ******
2025-09-08 00:49:59.833479 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.833489 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.833500 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.833511 | orchestrator |
2025-09-08 00:49:59.833522 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-09-08 00:49:59.833532 | orchestrator | Monday 08 September 2025 00:49:22 +0000 (0:00:00.822) 0:01:55.118 ******
2025-09-08 00:49:59.833543 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.833554 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.833564 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.833596 | orchestrator |
2025-09-08 00:49:59.833609 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-08 00:49:59.833620 | orchestrator | Monday 08 September 2025 00:49:23 +0000 (0:00:00.496) 0:01:55.615 ******
2025-09-08 00:49:59.833632 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833643 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833655 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833666 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833678 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833700 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833727 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833748 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833768 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833787 | orchestrator |
2025-09-08 00:49:59.833807 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-08 00:49:59.833825 | orchestrator | Monday 08 September 2025 00:49:24 +0000 (0:00:01.449) 0:01:57.065 ******
2025-09-08 00:49:59.833839 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833850 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833861 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833872 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.833962 | orchestrator |
2025-09-08 00:49:59.833973 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-08 00:49:59.833984 | orchestrator | Monday 08 September 2025 00:49:30 +0000 (0:00:05.921) 0:02:02.986 ******
2025-09-08 00:49:59.833995 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.834006 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.834044 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.834058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.834076 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.834087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.834098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.834118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.834130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:49:59.834142 | orchestrator |
2025-09-08 00:49:59.834152 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-08 00:49:59.834163 | orchestrator | Monday 08 September 2025 00:49:33 +0000 (0:00:02.875) 0:02:05.862 ******
2025-09-08 00:49:59.834174 | orchestrator |
2025-09-08 00:49:59.834185 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-08 00:49:59.834196 | orchestrator | Monday 08 September 2025 00:49:33 +0000 (0:00:00.077) 0:02:05.940 ******
2025-09-08 00:49:59.834206 | orchestrator |
2025-09-08 00:49:59.834217 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-08 00:49:59.834228 | orchestrator | Monday 08 September 2025 00:49:33 +0000 (0:00:00.289) 0:02:06.229 ******
2025-09-08 00:49:59.834238 | orchestrator |
2025-09-08 00:49:59.834249 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-08 00:49:59.834260 | orchestrator | Monday 08 September 2025 00:49:33 +0000 (0:00:00.084) 0:02:06.314 ******
2025-09-08 00:49:59.834271 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:49:59.834348 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:59.834369 | orchestrator |
2025-09-08 00:49:59.834380 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-08 00:49:59.834391 | orchestrator | Monday 08 September 2025 00:49:40 +0000 (0:00:06.202) 0:02:12.516 ******
2025-09-08 00:49:59.834402 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:49:59.834412 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:59.834423 | orchestrator |
2025-09-08 00:49:59.834434 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-08 00:49:59.834444 | orchestrator | Monday 08 September 2025 00:49:46 +0000 (0:00:06.543) 0:02:19.060 ******
2025-09-08 00:49:59.834455 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:49:59.834473 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:49:59.834484 | orchestrator |
2025-09-08 00:49:59.834495 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-08 00:49:59.834506 | orchestrator | Monday 08 September 2025 00:49:52 +0000 (0:00:06.216) 0:02:25.277 ******
2025-09-08 00:49:59.834517 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:49:59.834527 | orchestrator |
2025-09-08 00:49:59.834538 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-08 00:49:59.834549 | orchestrator | Monday 08 September 2025 00:49:52 +0000 (0:00:00.125) 0:02:25.402 ******
2025-09-08 00:49:59.834560 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.834571 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.834734 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.834753 | orchestrator |
2025-09-08 00:49:59.834782 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-08 00:49:59.834793 | orchestrator | Monday 08 September 2025 00:49:53 +0000 (0:00:00.789) 0:02:26.192 ******
2025-09-08 00:49:59.834804 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:49:59.834815 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:49:59.834826 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:49:59.834837 | orchestrator |
2025-09-08 00:49:59.834848 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-08 00:49:59.834859 | orchestrator | Monday 08 September 2025 00:49:54 +0000 (0:00:00.624) 0:02:26.817 ******
2025-09-08 00:49:59.834870 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.834881 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.834892 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.834903 | orchestrator |
2025-09-08 00:49:59.834914 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-08 00:49:59.834925 | orchestrator | Monday 08 September 2025 00:49:55 +0000 (0:00:00.730) 0:02:27.548 ******
2025-09-08 00:49:59.834933 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:49:59.834941 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:49:59.834949 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:49:59.834957 | orchestrator |
2025-09-08 00:49:59.834965 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-08 00:49:59.834973 | orchestrator | Monday 08 September 2025 00:49:55 +0000 (0:00:00.619) 0:02:28.168 ******
2025-09-08 00:49:59.834981 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.834989 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.834997 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.835005 | orchestrator |
2025-09-08 00:49:59.835013 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-08 00:49:59.835020 | orchestrator | Monday 08 September 2025 00:49:56 +0000 (0:00:00.781) 0:02:28.950 ******
2025-09-08 00:49:59.835029 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:49:59.835037 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:49:59.835044 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:49:59.835052 | orchestrator |
2025-09-08 00:49:59.835060 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:49:59.835069 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-08 00:49:59.835077 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-08 00:49:59.835095 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-08 00:49:59.835110 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:49:59.835119 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:49:59.835135 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 00:49:59.835143 | orchestrator |
2025-09-08 00:49:59.835151 | orchestrator |
2025-09-08 00:49:59.835159 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:49:59.835167 | orchestrator | Monday 08 September 2025 00:49:57 +0000 (0:00:01.131) 0:02:30.081 ******
2025-09-08 00:49:59.835175 | orchestrator | ===============================================================================
2025-09-08 00:49:59.835183 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.10s
2025-09-08 00:49:59.835191 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.50s
2025-09-08 00:49:59.835199 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.05s
2025-09-08 00:49:59.835206 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.64s
2025-09-08 00:49:59.835215 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.27s
2025-09-08 00:49:59.835223 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.92s
2025-09-08 00:49:59.835231 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.51s
2025-09-08 00:49:59.835238 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.88s
2025-09-08 00:49:59.835246 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.72s
2025-09-08 00:49:59.835254 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.43s
2025-09-08 00:49:59.835262 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.15s
2025-09-08 00:49:59.835270 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.76s
2025-09-08 00:49:59.835278 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.70s
2025-09-08 00:49:59.835286 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.66s
2025-09-08 00:49:59.835293 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.65s
2025-09-08 00:49:59.835301 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.62s
2025-09-08 00:49:59.835309 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s
2025-09-08 00:49:59.835317 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.29s
2025-09-08 00:49:59.835325 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.25s
2025-09-08 00:49:59.835333 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.22s
2025-09-08 00:49:59.835341 | orchestrator | 2025-09-08 00:49:59 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:49:59.835349 | orchestrator | 2025-09-08 00:49:59 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:02.868598 | orchestrator | 2025-09-08 00:50:02 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:02.871110 | orchestrator | 2025-09-08 00:50:02 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:02.871225 | orchestrator | 2025-09-08 00:50:02 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:05.920549 | orchestrator | 2025-09-08 00:50:05 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:05.920824 | orchestrator | 2025-09-08 00:50:05 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:05.920849 | orchestrator | 2025-09-08 00:50:05 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:08.967945 | orchestrator | 2025-09-08 00:50:08 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:08.970192 | orchestrator | 2025-09-08 00:50:08 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:08.970238 | orchestrator | 2025-09-08 00:50:08 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:12.019619 | orchestrator | 2025-09-08 00:50:12 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:12.019729 | orchestrator | 2025-09-08 00:50:12 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:12.019746 | orchestrator | 2025-09-08 00:50:12 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:15.060798 | orchestrator | 2025-09-08 00:50:15 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:15.062408 | orchestrator | 2025-09-08 00:50:15 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:15.062471 | orchestrator | 2025-09-08 00:50:15 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:18.108505 | orchestrator | 2025-09-08 00:50:18 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:18.108977 | orchestrator | 2025-09-08 00:50:18 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:18.109009 | orchestrator | 2025-09-08 00:50:18 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:21.151326 | orchestrator | 2025-09-08 00:50:21 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:21.152443 | orchestrator | 2025-09-08 00:50:21 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:21.153038 | orchestrator | 2025-09-08 00:50:21 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:24.193508 | orchestrator | 2025-09-08 00:50:24 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:24.195162 | orchestrator | 2025-09-08 00:50:24 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:24.195396 | orchestrator | 2025-09-08 00:50:24 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:27.237200 | orchestrator | 2025-09-08 00:50:27 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:27.237303 | orchestrator | 2025-09-08 00:50:27 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:27.237318 | orchestrator | 2025-09-08 00:50:27 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:30.281839 | orchestrator | 2025-09-08 00:50:30 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:30.288203 | orchestrator | 2025-09-08 00:50:30 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:30.288255 | orchestrator | 2025-09-08 00:50:30 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:33.334660 | orchestrator | 2025-09-08 00:50:33 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:33.336089 | orchestrator | 2025-09-08 00:50:33 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:33.336934 | orchestrator | 2025-09-08 00:50:33 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:36.381178 | orchestrator | 2025-09-08 00:50:36 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:36.382185 | orchestrator | 2025-09-08 00:50:36 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:36.382208 | orchestrator | 2025-09-08 00:50:36 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:39.425196 | orchestrator | 2025-09-08 00:50:39 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:39.425612 | orchestrator | 2025-09-08 00:50:39 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:39.426212 | orchestrator | 2025-09-08 00:50:39 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:42.471931 | orchestrator | 2025-09-08 00:50:42 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:42.472736 | orchestrator | 2025-09-08 00:50:42 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:42.472880 | orchestrator | 2025-09-08 00:50:42 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:45.509226 | orchestrator | 2025-09-08 00:50:45 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:45.510177 | orchestrator | 2025-09-08 00:50:45 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:45.510425 | orchestrator | 2025-09-08 00:50:45 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:48.558824 | orchestrator | 2025-09-08 00:50:48 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:48.561514 | orchestrator | 2025-09-08 00:50:48 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:48.561765 | orchestrator | 2025-09-08 00:50:48 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:51.604848 | orchestrator | 2025-09-08 00:50:51 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:51.606486 | orchestrator | 2025-09-08 00:50:51 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:51.606535 | orchestrator | 2025-09-08 00:50:51 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:54.667761 | orchestrator | 2025-09-08 00:50:54 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:54.669735 | orchestrator | 2025-09-08 00:50:54 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:54.669768 | orchestrator | 2025-09-08 00:50:54 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:50:57.705085 | orchestrator | 2025-09-08 00:50:57 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:50:57.706280 | orchestrator | 2025-09-08 00:50:57 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:50:57.706384 | orchestrator | 2025-09-08 00:50:57 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:00.756154 | orchestrator | 2025-09-08 00:51:00 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:00.758803 | orchestrator | 2025-09-08 00:51:00 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:00.758854 | orchestrator | 2025-09-08 00:51:00 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:03.810652 | orchestrator | 2025-09-08 00:51:03 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:03.812133 | orchestrator | 2025-09-08 00:51:03 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:03.812162 | orchestrator | 2025-09-08 00:51:03 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:06.858184 | orchestrator | 2025-09-08 00:51:06 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:06.858504 | orchestrator | 2025-09-08 00:51:06 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:06.860357 | orchestrator | 2025-09-08 00:51:06 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:09.904180 | orchestrator | 2025-09-08 00:51:09 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:09.904310 | orchestrator | 2025-09-08 00:51:09 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:09.904325 | orchestrator | 2025-09-08 00:51:09 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:12.949533 | orchestrator | 2025-09-08 00:51:12 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:12.949875 | orchestrator | 2025-09-08 00:51:12 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:12.949921 | orchestrator | 2025-09-08 00:51:12 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:16.003114 | orchestrator | 2025-09-08 00:51:16 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:16.004916 | orchestrator | 2025-09-08 00:51:16 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:16.004944 | orchestrator | 2025-09-08 00:51:16 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:19.056379 | orchestrator | 2025-09-08 00:51:19 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:19.058410 | orchestrator | 2025-09-08 00:51:19 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:19.058502 | orchestrator | 2025-09-08 00:51:19 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:22.110860 | orchestrator | 2025-09-08 00:51:22 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:22.112687 | orchestrator | 2025-09-08 00:51:22 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:22.113122 | orchestrator | 2025-09-08 00:51:22 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:25.165140 | orchestrator | 2025-09-08 00:51:25 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:25.167100 | orchestrator | 2025-09-08 00:51:25 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:25.167676 | orchestrator | 2025-09-08 00:51:25 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:28.219808 | orchestrator | 2025-09-08 00:51:28 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:28.221215 | orchestrator | 2025-09-08 00:51:28 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:28.221243 | orchestrator | 2025-09-08 00:51:28 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:31.265859 | orchestrator | 2025-09-08 00:51:31 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:31.271646 | orchestrator | 2025-09-08 00:51:31 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:31.271680 | orchestrator | 2025-09-08 00:51:31 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:34.305641 | orchestrator | 2025-09-08 00:51:34 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:34.306282 | orchestrator | 2025-09-08 00:51:34 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:34.306313 | orchestrator | 2025-09-08 00:51:34 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:37.348399 | orchestrator | 2025-09-08 00:51:37 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:37.348695 | orchestrator | 2025-09-08 00:51:37 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:37.348871 | orchestrator | 2025-09-08 00:51:37 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:40.392700 | orchestrator | 2025-09-08 00:51:40 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:40.394982 | orchestrator | 2025-09-08 00:51:40 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:40.395056 | orchestrator | 2025-09-08 00:51:40 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:43.449110 | orchestrator | 2025-09-08 00:51:43 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:43.450704 | orchestrator | 2025-09-08 00:51:43 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:43.450822 | orchestrator | 2025-09-08 00:51:43 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:46.494375 | orchestrator | 2025-09-08 00:51:46 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:46.495235 | orchestrator | 2025-09-08 00:51:46 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:46.495266 | orchestrator | 2025-09-08 00:51:46 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:49.546304 | orchestrator | 2025-09-08 00:51:49 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:49.549364 | orchestrator | 2025-09-08 00:51:49 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:49.549447 | orchestrator | 2025-09-08 00:51:49 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:52.602737 | orchestrator | 2025-09-08 00:51:52 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:52.607621 | orchestrator | 2025-09-08 00:51:52 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:52.607928 | orchestrator | 2025-09-08 00:51:52 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:55.653808 | orchestrator | 2025-09-08 00:51:55 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:55.654301 | orchestrator | 2025-09-08 00:51:55 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:55.654333 | orchestrator | 2025-09-08 00:51:55 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:51:58.712029 | orchestrator | 2025-09-08 00:51:58 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:51:58.713506 | orchestrator | 2025-09-08 00:51:58 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:51:58.714704 | orchestrator | 2025-09-08 00:51:58 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:52:01.762096 | orchestrator | 2025-09-08 00:52:01 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:52:01.765941 | orchestrator | 2025-09-08 00:52:01 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:52:01.765975 | orchestrator | 2025-09-08 00:52:01 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:52:04.809793 | orchestrator | 2025-09-08 00:52:04 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:52:04.810289 | orchestrator | 2025-09-08 00:52:04 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:52:04.810404 | orchestrator | 2025-09-08 00:52:04 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:52:07.849262 | orchestrator | 2025-09-08 00:52:07 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:52:07.853466 | orchestrator | 2025-09-08 00:52:07 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:52:07.853502 | orchestrator | 2025-09-08 00:52:07 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:52:10.900305 | orchestrator | 2025-09-08 00:52:10 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:52:10.902104 | orchestrator | 2025-09-08 00:52:10 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:52:10.902344 | orchestrator | 2025-09-08 00:52:10 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:52:13.995046 | orchestrator | 2025-09-08 00:52:13 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:52:13.996257 | orchestrator | 2025-09-08 00:52:13 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED
2025-09-08 00:52:13.996288 | orchestrator | 2025-09-08 00:52:13 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:52:17.039770 | orchestrator | 2025-09-08 00:52:17 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:52:17.041755 | orchestrator | 2025-09-08 00:52:17 | INFO
| Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:17.041788 | orchestrator | 2025-09-08 00:52:17 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:20.089627 | orchestrator | 2025-09-08 00:52:20 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:20.092215 | orchestrator | 2025-09-08 00:52:20 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:20.092246 | orchestrator | 2025-09-08 00:52:20 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:23.132858 | orchestrator | 2025-09-08 00:52:23 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:23.134859 | orchestrator | 2025-09-08 00:52:23 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:23.135736 | orchestrator | 2025-09-08 00:52:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:26.181331 | orchestrator | 2025-09-08 00:52:26 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:26.181911 | orchestrator | 2025-09-08 00:52:26 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:26.181944 | orchestrator | 2025-09-08 00:52:26 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:29.232616 | orchestrator | 2025-09-08 00:52:29 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:29.234614 | orchestrator | 2025-09-08 00:52:29 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:29.235089 | orchestrator | 2025-09-08 00:52:29 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:32.289429 | orchestrator | 2025-09-08 00:52:32 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:32.290688 | orchestrator | 2025-09-08 00:52:32 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 
00:52:32.290968 | orchestrator | 2025-09-08 00:52:32 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:35.338711 | orchestrator | 2025-09-08 00:52:35 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:35.340831 | orchestrator | 2025-09-08 00:52:35 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:35.340882 | orchestrator | 2025-09-08 00:52:35 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:38.401210 | orchestrator | 2025-09-08 00:52:38 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:38.402996 | orchestrator | 2025-09-08 00:52:38 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:38.403033 | orchestrator | 2025-09-08 00:52:38 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:41.451721 | orchestrator | 2025-09-08 00:52:41 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:41.452391 | orchestrator | 2025-09-08 00:52:41 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:41.452429 | orchestrator | 2025-09-08 00:52:41 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:44.503321 | orchestrator | 2025-09-08 00:52:44 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:44.506316 | orchestrator | 2025-09-08 00:52:44 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:44.507122 | orchestrator | 2025-09-08 00:52:44 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:47.553820 | orchestrator | 2025-09-08 00:52:47 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:47.554164 | orchestrator | 2025-09-08 00:52:47 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:47.554468 | orchestrator | 2025-09-08 00:52:47 | INFO  | Wait 1 second(s) 
until the next check 2025-09-08 00:52:50.590670 | orchestrator | 2025-09-08 00:52:50 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:50.593242 | orchestrator | 2025-09-08 00:52:50 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state STARTED 2025-09-08 00:52:50.593465 | orchestrator | 2025-09-08 00:52:50 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:52:53.642237 | orchestrator | 2025-09-08 00:52:53 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:52:53.642353 | orchestrator | 2025-09-08 00:52:53 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:52:53.661480 | orchestrator | 2025-09-08 00:52:53 | INFO  | Task 4669c2b2-16a4-40d7-862c-1218fbf6a1f9 is in state SUCCESS 2025-09-08 00:52:53.662875 | orchestrator | 2025-09-08 00:52:53.662945 | orchestrator | 2025-09-08 00:52:53.662960 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:52:53.662974 | orchestrator | 2025-09-08 00:52:53.662986 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:52:53.662998 | orchestrator | Monday 08 September 2025 00:46:08 +0000 (0:00:00.291) 0:00:00.291 ****** 2025-09-08 00:52:53.663011 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:52:53.663080 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:52:53.663093 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:52:53.663104 | orchestrator | 2025-09-08 00:52:53.663115 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:52:53.663128 | orchestrator | Monday 08 September 2025 00:46:09 +0000 (0:00:00.369) 0:00:00.660 ****** 2025-09-08 00:52:53.663140 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-08 00:52:53.663151 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 
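The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records above come from a simple poll-and-sleep loop: the client re-queries each pending task's state and sleeps between rounds until every task leaves STARTED (here, task 4669c2b2… finally reaches SUCCESS). A minimal sketch of that loop, with hypothetical names (`wait_for_tasks`, `get_state`) that are not the real OSISM client API:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1, timeout=300, sleep=time.sleep):
    """Poll get_state(task_id) until no task is in state STARTED.

    Hypothetical sketch of the polling behaviour seen in the log; the real
    client, state names, and timeout handling may differ.
    """
    waited = 0
    while True:
        # One check round: query the current state of every task.
        states = {tid: get_state(tid) for tid in task_ids}
        pending = [tid for tid, state in states.items() if state == "STARTED"]
        if not pending:
            return states
        if waited >= timeout:
            raise TimeoutError(f"tasks still running: {pending}")
        sleep(interval)  # "Wait 1 second(s) until the next check"
        waited += interval
```

In the log the two tasks stay STARTED for roughly two minutes before one flips to SUCCESS; a loop like this simply keeps cycling until that transition appears.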
2025-09-08 00:52:53.663163 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-09-08 00:52:53.663174 | orchestrator |
2025-09-08 00:52:53.663185 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-09-08 00:52:53.663196 | orchestrator |
2025-09-08 00:52:53.663207 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-08 00:52:53.663218 | orchestrator | Monday 08 September 2025 00:46:10 +0000 (0:00:00.682) 0:00:01.343 ******
2025-09-08 00:52:53.663253 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.663265 | orchestrator |
2025-09-08 00:52:53.663276 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-09-08 00:52:53.663287 | orchestrator | Monday 08 September 2025 00:46:10 +0000 (0:00:00.522) 0:00:01.865 ******
2025-09-08 00:52:53.663298 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.663309 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.663320 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.663330 | orchestrator |
2025-09-08 00:52:53.663341 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-08 00:52:53.663427 | orchestrator | Monday 08 September 2025 00:46:12 +0000 (0:00:01.756) 0:00:03.622 ******
2025-09-08 00:52:53.663440 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.663451 | orchestrator |
2025-09-08 00:52:53.663462 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-09-08 00:52:53.663473 | orchestrator | Monday 08 September 2025 00:46:12 +0000 (0:00:00.702) 0:00:04.291 ******
2025-09-08 00:52:53.663486 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.663499 | orchestrator | ok: [testbed-node-1]
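The "Check IPv6 support" tasks above probe each node before any IPv6-dependent settings (such as `net.ipv6.ip_nonlocal_bind`) are applied. One common Linux idiom for this probe is checking whether the kernel exposes `/proc/net/if_inet6`; whether kolla-ansible uses exactly this check is an assumption, so treat this as an illustrative sketch:

```python
import os


def check_ipv6_support(proc_path="/proc/net/if_inet6"):
    """Return True when the kernel exposes IPv6 interfaces.

    Sketch only: /proc/net/if_inet6 exists on Linux kernels with IPv6
    enabled; the actual probe used by the deployment tooling may differ.
    """
    return os.path.exists(proc_path)
```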
2025-09-08 00:52:53.663531 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.663543 | orchestrator |
2025-09-08 00:52:53.663556 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-09-08 00:52:53.663569 | orchestrator | Monday 08 September 2025 00:46:13 +0000 (0:00:00.702) 0:00:04.994 ******
2025-09-08 00:52:53.663582 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-08 00:52:53.663595 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-08 00:52:53.663608 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-08 00:52:53.663621 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-08 00:52:53.663633 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-08 00:52:53.663648 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-08 00:52:53.663667 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-08 00:52:53.663687 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-08 00:52:53.663707 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-08 00:52:53.663745 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-08 00:52:53.663766 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-08 00:52:53.663779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-08 00:52:53.663792 | orchestrator |
2025-09-08 00:52:53.663804 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-08 00:52:53.663817 | orchestrator | Monday 08 September 2025 00:46:17 +0000 (0:00:03.980) 0:00:08.974 ******
2025-09-08 00:52:53.663831 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-08 00:52:53.663843 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-08 00:52:53.663854 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-08 00:52:53.663865 | orchestrator |
2025-09-08 00:52:53.663876 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-08 00:52:53.663887 | orchestrator | Monday 08 September 2025 00:46:19 +0000 (0:00:01.381) 0:00:10.355 ******
2025-09-08 00:52:53.663898 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-08 00:52:53.663909 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-08 00:52:53.664030 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-08 00:52:53.664045 | orchestrator |
2025-09-08 00:52:53.664092 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-08 00:52:53.664141 | orchestrator | Monday 08 September 2025 00:46:21 +0000 (0:00:02.545) 0:00:12.901 ******
2025-09-08 00:52:53.664153 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-09-08 00:52:53.664165 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.664190 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-09-08 00:52:53.664202 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.664213 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-09-08 00:52:53.664223 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.664234 | orchestrator |
2025-09-08 00:52:53.664245 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-09-08 00:52:53.664256 | orchestrator | Monday
08 September 2025 00:46:22 +0000 (0:00:00.880) 0:00:13.782 ****** 2025-09-08 00:52:53.664271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-08 00:52:53.664291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-08 00:52:53.664303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-08 00:52:53.664320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:52:53.664333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:52:53.664359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:52:53.664372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:52:53.664383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:52:53.664395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:52:53.664406 | orchestrator | 2025-09-08 00:52:53.664418 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] 
************ 2025-09-08 00:52:53.664429 | orchestrator | Monday 08 September 2025 00:46:24 +0000 (0:00:02.446) 0:00:16.229 ****** 2025-09-08 00:52:53.664440 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.664479 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.664491 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.664502 | orchestrator | 2025-09-08 00:52:53.664571 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-08 00:52:53.664583 | orchestrator | Monday 08 September 2025 00:46:27 +0000 (0:00:02.355) 0:00:18.584 ****** 2025-09-08 00:52:53.664671 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-08 00:52:53.664684 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-08 00:52:53.664695 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-08 00:52:53.664706 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-08 00:52:53.664717 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-08 00:52:53.664727 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-08 00:52:53.664738 | orchestrator | 2025-09-08 00:52:53.664749 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-08 00:52:53.664760 | orchestrator | Monday 08 September 2025 00:46:29 +0000 (0:00:02.642) 0:00:21.227 ****** 2025-09-08 00:52:53.664783 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.664794 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.664805 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.664815 | orchestrator | 2025-09-08 00:52:53.664826 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-08 00:52:53.664837 | orchestrator | Monday 08 September 2025 00:46:32 +0000 (0:00:02.460) 0:00:23.687 ****** 2025-09-08 00:52:53.664854 | orchestrator | ok: [testbed-node-0] 2025-09-08 
00:52:53.664865 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:52:53.664876 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:52:53.664887 | orchestrator | 2025-09-08 00:52:53.664898 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-08 00:52:53.664909 | orchestrator | Monday 08 September 2025 00:46:34 +0000 (0:00:02.523) 0:00:26.211 ****** 2025-09-08 00:52:53.664920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.664994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.665009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.665022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-08 00:52:53.665034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.665088 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.665107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.665119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.665137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-08 00:52:53.665149 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.665160 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.665171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.665183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.665201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-08 00:52:53.665241 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.665252 | orchestrator |
2025-09-08 00:52:53.665294 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-09-08 00:52:53.665306 | orchestrator | Monday 08 September 2025 00:46:35 +0000 (0:00:00.836) 0:00:27.047 ******
2025-09-08 00:52:53.665323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.665379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.665393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.665405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.665416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.665435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-08 00:52:53.665460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.665472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes':
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.665491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-08 00:52:53.665503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.665656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.665677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9', '__omit_place_holder__55e58b6d309cfc2d9ffea7da64e47723438a98d9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-08 00:52:53.665688 | orchestrator |
2025-09-08 00:52:53.665699 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-09-08 00:52:53.665711 | orchestrator | Monday 08 September 2025 00:46:38 +0000 (0:00:03.264) 0:00:30.312 ******
2025-09-08 00:52:53.665728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.665740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.665762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.665774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.665785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.665838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.665850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.665866 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.665878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.665889 | orchestrator |
2025-09-08 00:52:53.665900 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-09-08 00:52:53.665911 | orchestrator | Monday 08 September 2025 00:46:42 +0000 (0:00:03.945) 0:00:34.258 ******
2025-09-08 00:52:53.665922 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-08 00:52:53.670221 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-08 00:52:53.670293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-08 00:52:53.670309 | orchestrator |
2025-09-08 00:52:53.670321 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-09-08 00:52:53.670332 | orchestrator | Monday 08 September 2025 00:46:48 +0000 (0:00:05.105) 0:00:39.363 ******
2025-09-08 00:52:53.670344 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-08 00:52:53.670355 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-08 00:52:53.670365 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-08 00:52:53.670404 | orchestrator |
2025-09-08 00:52:53.670414 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-09-08 00:52:53.670424 | orchestrator | Monday 08 September 2025 00:46:52 +0000 (0:00:04.558) 0:00:43.921 ******
2025-09-08 00:52:53.670434 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.670444 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.670453 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.670463 | orchestrator |
2025-09-08 00:52:53.670472 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-09-08 00:52:53.670482 | orchestrator | Monday 08 September 2025 00:46:53 +0000 (0:00:00.702) 0:00:44.624 ******
2025-09-08 00:52:53.670492 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-08 00:52:53.670503 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-08 00:52:53.670542 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-08 00:52:53.670552 | orchestrator |
2025-09-08 00:52:53.670562 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-09-08 00:52:53.670572 | orchestrator | Monday 08 September 2025 00:46:56 +0000 (0:00:03.375) 0:00:47.999 ******
2025-09-08 00:52:53.670581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-08 00:52:53.670591 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-08 00:52:53.670601 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-08 00:52:53.670611 | orchestrator |
2025-09-08 00:52:53.670621 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-09-08 00:52:53.670631 | orchestrator | Monday 08 September 2025 00:46:59 +0000 (0:00:02.470) 0:00:50.470 ******
2025-09-08 00:52:53.670640 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-09-08 00:52:53.670651 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-09-08 00:52:53.670661 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-09-08 00:52:53.670670 | orchestrator |
2025-09-08 00:52:53.670680 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-09-08 00:52:53.670690 | orchestrator | Monday 08 September 2025 00:47:00 +0000 (0:00:01.707) 0:00:52.177 ******
2025-09-08 00:52:53.670700 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-09-08 00:52:53.670709 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-09-08 00:52:53.670719 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-09-08 00:52:53.670729 | orchestrator |
2025-09-08 00:52:53.670738 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-08 00:52:53.670748 | orchestrator | Monday 08 September 2025 00:47:02 +0000 (0:00:01.575) 0:00:53.753 ******
2025-09-08 00:52:53.670771 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.670781 | orchestrator |
2025-09-08 00:52:53.670791 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-09-08 00:52:53.670800 | orchestrator | Monday 08 September 2025 00:47:03 +0000 (0:00:01.099) 0:00:54.852 ******
2025-09-08 00:52:53.670813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.670847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.670859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.670869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.670880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.670895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.670906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.670923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.670943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.670953 | orchestrator |
2025-09-08 00:52:53.670963 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-09-08 00:52:53.670973 | orchestrator | Monday 08 September 2025 00:47:07 +0000 (0:00:03.926) 0:00:58.779 ******
2025-09-08 00:52:53.670983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.670994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.671004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.671014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.671024 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.671034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.671123 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.671142 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.671153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.671163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.671174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.671183 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.671193 | orchestrator |
2025-09-08 00:52:53.671203 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-09-08 00:52:53.671213 | orchestrator | Monday 08 September 2025 00:47:09 +0000 (0:00:02.033) 0:01:00.812 ******
2025-09-08 00:52:53.671227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.671244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.671262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.671272 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.671282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.671292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-08 00:52:53.671302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-08 00:52:53.671312 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.671322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-08 00:52:53.671337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671364 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.671373 | orchestrator | 2025-09-08 00:52:53.671383 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-08 00:52:53.671393 | orchestrator | Monday 08 September 2025 00:47:13 +0000 (0:00:03.643) 0:01:04.456 ****** 2025-09-08 00:52:53.671408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671439 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.671449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671488 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.671503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671557 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.671566 | orchestrator | 2025-09-08 00:52:53.671576 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-08 00:52:53.671585 | orchestrator | Monday 08 September 2025 00:47:15 +0000 (0:00:02.109) 0:01:06.566 ****** 2025-09-08 00:52:53.671595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671637 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.671647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671685 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.671695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671736 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.671746 | orchestrator | 2025-09-08 00:52:53.671755 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-08 00:52:53.671765 | orchestrator | Monday 08 September 2025 00:47:16 +0000 (0:00:00.908) 0:01:07.474 ****** 2025-09-08 00:52:53.671775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671813 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.671823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671860 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.671874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.671910 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.671920 | orchestrator | 2025-09-08 00:52:53.671930 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-08 00:52:53.671939 | orchestrator | Monday 08 September 2025 00:47:17 
+0000 (0:00:01.206) 0:01:08.681 ****** 2025-09-08 00:52:53.671949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.671975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.671993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.672003 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.672013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.672029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.672039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.672055 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.672065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.672075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.672085 | 
orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.672095 | orchestrator | 2025-09-08 00:52:53.672104 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-08 00:52:53.672114 | orchestrator | Monday 08 September 2025 00:47:18 +0000 (0:00:00.816) 0:01:09.498 ****** 2025-09-08 00:52:53.672128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.672139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.672155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.672165 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.672175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.672193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.672203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.672213 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.672223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.672237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-08 00:52:53.672248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.672258 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.672268 | orchestrator | 2025-09-08 00:52:53.672277 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-08 00:52:53.672292 | orchestrator | Monday 08 September 2025 00:47:19 +0000 (0:00:01.136) 0:01:10.634 ****** 2025-09-08 00:52:53.672302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.672319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
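The loop items above all carry the same `healthcheck` dict shape (`interval`, `retries`, `start_period`, `test`, `timeout`, with `test` as a `['CMD-SHELL', '<command>']` pair). A minimal sketch of how such a dict could be translated into `docker run` health flags; the dict shape and key names come from the log entries, while the flag mapping itself is an illustrative assumption, not kolla-ansible's actual implementation:

```python
# Sketch: map a kolla-style healthcheck dict (as seen in the loop items above)
# to `docker run` health flags. Key names are taken from the log; the mapping
# to CLI flags is an assumption for illustration only.
def healthcheck_flags(hc: dict) -> list:
    if not hc:
        # services like keepalived have no healthcheck dict at all
        return []
    return [
        "--health-interval={}s".format(hc["interval"]),
        "--health-retries={}".format(hc["retries"]),
        "--health-start-period={}s".format(hc["start_period"]),
        "--health-timeout={}s".format(hc["timeout"]),
        # 'test' is ['CMD-SHELL', '<command>']; docker takes the command string
        "--health-cmd={}".format(hc["test"][1]),
    ]

# Example using the proxysql healthcheck from the log above:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"], "timeout": "30"}
print(healthcheck_flags(hc))
```

Note that keepalived appears in the log with no `healthcheck` key, which is why the sketch returns an empty flag list for a missing or empty dict.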
2025-09-08 00:52:53.672329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.672339 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.672349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.672363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-09-08 00:52:53.672373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.672383 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.672399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-08 00:52:53.672415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
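The alternating `changed:` and `skipping:` results above reflect kolla-ansible iterating one service-definition dict per host, with each item run or skipped by a per-item condition. A minimal sketch of that loop behaviour, assuming a simplified `enabled`/group-membership condition (the real conditional in kolla-ansible may differ):

```python
# Sketch of the loop behaviour visible in the log: a dict of service
# definitions is iterated per host, and each item either runs ("changed")
# or is skipped. The group-membership model here is an illustrative
# assumption, not kolla-ansible's actual inventory logic.
services = {
    "haproxy":    {"group": "loadbalancer", "enabled": True},
    "proxysql":   {"group": "loadbalancer", "enabled": True},
    "keepalived": {"group": "loadbalancer", "enabled": True},
}

# hypothetical host-to-groups mapping for illustration
host_groups = {"testbed-node-0": {"loadbalancer"}}

def items_to_run(host: str) -> list:
    # mirrors a `when: item.value.enabled and host in groups[item.value.group]`
    # style condition on each loop item
    return [name for name, svc in services.items()
            if svc["enabled"] and svc["group"] in host_groups.get(host, set())]

print(items_to_run("testbed-node-0"))  # all three loadbalancer services run
print(items_to_run("testbed-node-9"))  # no matching group: every item skipped
```

Under this model, a task whose extra condition fails (e.g. the backend-TLS copy tasks above) skips every item on every host, which matches the all-`skipping:` blocks in the log.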
2025-09-08 00:52:53.672426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-08 00:52:53.672435 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.672445 | orchestrator | 2025-09-08 00:52:53.672455 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-08 00:52:53.672465 | orchestrator | Monday 08 September 2025 00:47:20 +0000 (0:00:01.051) 0:01:11.685 ****** 2025-09-08 00:52:53.672474 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-08 00:52:53.672484 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-08 00:52:53.672494 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-08 00:52:53.672503 | orchestrator | 2025-09-08 00:52:53.672526 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-08 00:52:53.672536 | orchestrator | Monday 08 September 2025 00:47:22 +0000 (0:00:01.679) 0:01:13.364 ****** 2025-09-08 00:52:53.672545 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-08 00:52:53.672555 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-08 00:52:53.672565 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-08 00:52:53.672574 | orchestrator | 2025-09-08 00:52:53.672584 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-08 00:52:53.672594 | orchestrator | Monday 08 September 2025 00:47:24 +0000 (0:00:02.011) 0:01:15.376 ****** 2025-09-08 00:52:53.672603 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-08 00:52:53.672617 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-08 00:52:53.672627 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-08 00:52:53.672637 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-08 00:52:53.672646 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.672656 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-08 00:52:53.672666 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.672676 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-08 00:52:53.672691 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.672701 | orchestrator | 2025-09-08 00:52:53.672710 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-08 00:52:53.672720 | orchestrator | Monday 08 September 2025 00:47:25 +0000 (0:00:01.386) 0:01:16.762 ****** 2025-09-08 00:52:53.672735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-08 00:52:53.672746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-08 00:52:53.672756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-08 00:52:53.672767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:52:53.672777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:52:53.672791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-08 00:52:53.672808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:52:53.672824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:52:53.672834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-08 00:52:53.672844 | orchestrator | 2025-09-08 00:52:53.672854 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-08 00:52:53.672864 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:03.906) 0:01:20.668 ****** 2025-09-08 00:52:53.672874 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.672884 | orchestrator | 2025-09-08 
00:52:53.672893 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-08 00:52:53.672903 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:00.573) 0:01:21.241 ****** 2025-09-08 00:52:53.672914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-08 00:52:53.672925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.672946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.672957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.672972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-08 00:52:53.672983 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.672993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-08 00:52:53.673037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.673052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 
00:52:53.673063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673072 | orchestrator | 2025-09-08 00:52:53.673082 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-08 00:52:53.673092 | orchestrator | Monday 08 September 2025 00:47:34 +0000 (0:00:04.873) 0:01:26.114 ****** 2025-09-08 00:52:53.673102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-08 00:52:53.673112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.673132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673152 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.673167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-08 00:52:53.673178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.673188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673198 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673214 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.673228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-08 00:52:53.673238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.673253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673273 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.673283 | orchestrator | 2025-09-08 00:52:53.673293 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-08 00:52:53.673303 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:00.674) 0:01:26.788 ****** 2025-09-08 00:52:53.673313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}})  2025-09-08 00:52:53.673324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:52:53.673333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:52:53.673352 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.673362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:52:53.673372 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.673381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:52:53.673391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-08 00:52:53.673400 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.673410 | orchestrator | 2025-09-08 00:52:53.673419 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-08 00:52:53.673429 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.983) 0:01:27.772 ****** 2025-09-08 00:52:53.673438 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.673448 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.673457 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.673467 | orchestrator | 2025-09-08 00:52:53.673476 | orchestrator | 
TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-08 00:52:53.673491 | orchestrator | Monday 08 September 2025 00:47:38 +0000 (0:00:01.711) 0:01:29.484 ****** 2025-09-08 00:52:53.673500 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.673525 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.673534 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.673544 | orchestrator | 2025-09-08 00:52:53.673553 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-08 00:52:53.673563 | orchestrator | Monday 08 September 2025 00:47:40 +0000 (0:00:02.056) 0:01:31.540 ****** 2025-09-08 00:52:53.673572 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.673582 | orchestrator | 2025-09-08 00:52:53.673591 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-08 00:52:53.673601 | orchestrator | Monday 08 September 2025 00:47:40 +0000 (0:00:00.691) 0:01:32.232 ****** 2025-09-08 00:52:53.673618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 00:52:53.673629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 00:52:53.673645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 00:52:53.673708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673735 | orchestrator | 2025-09-08 00:52:53.673744 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-08 00:52:53.673754 | orchestrator | Monday 08 September 2025 00:47:46 +0000 (0:00:05.420) 0:01:37.652 ****** 2025-09-08 00:52:53.673765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.673775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673802 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.673847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.673869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673889 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.673904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.673921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.673947 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.673957 | orchestrator | 2025-09-08 00:52:53.673967 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-08 00:52:53.673977 | orchestrator | Monday 08 September 2025 00:47:47 +0000 (0:00:00.887) 0:01:38.540 ****** 2025-09-08 00:52:53.673987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-08 00:52:53.673997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-08 00:52:53.674008 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.674053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-08 00:52:53.674063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-08 00:52:53.674073 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.674083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-08 00:52:53.674093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-08 00:52:53.674103 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.674112 | orchestrator | 2025-09-08 00:52:53.674122 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-08 00:52:53.674132 | orchestrator | Monday 08 September 2025 00:47:48 +0000 (0:00:00.998) 0:01:39.539 ****** 2025-09-08 00:52:53.674142 | orchestrator | changed: 
[testbed-node-0] 2025-09-08 00:52:53.674152 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.674161 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.674171 | orchestrator | 2025-09-08 00:52:53.674180 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-08 00:52:53.674190 | orchestrator | Monday 08 September 2025 00:47:49 +0000 (0:00:01.393) 0:01:40.932 ****** 2025-09-08 00:52:53.674200 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.674210 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.674219 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.674229 | orchestrator | 2025-09-08 00:52:53.674243 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-08 00:52:53.674253 | orchestrator | Monday 08 September 2025 00:47:51 +0000 (0:00:02.101) 0:01:43.034 ****** 2025-09-08 00:52:53.674263 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.674272 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.674282 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.674291 | orchestrator | 2025-09-08 00:52:53.674301 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-08 00:52:53.674310 | orchestrator | Monday 08 September 2025 00:47:52 +0000 (0:00:00.494) 0:01:43.528 ****** 2025-09-08 00:52:53.674320 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.674330 | orchestrator | 2025-09-08 00:52:53.674339 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-08 00:52:53.674355 | orchestrator | Monday 08 September 2025 00:47:52 +0000 (0:00:00.699) 0:01:44.227 ****** 2025-09-08 00:52:53.674384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-08 00:52:53.674397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-08 00:52:53.674407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-08 00:52:53.674417 | orchestrator | 2025-09-08 00:52:53.674427 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-08 00:52:53.674437 | orchestrator | Monday 08 September 2025 00:47:55 +0000 (0:00:02.947) 0:01:47.175 ****** 2025-09-08 00:52:53.674452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-08 00:52:53.674462 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.674472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-08 00:52:53.674489 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.674530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-08 00:52:53.674542 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.674551 | orchestrator | 2025-09-08 00:52:53.674561 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-08 00:52:53.674571 | orchestrator | Monday 08 September 2025 00:47:57 +0000 (0:00:02.018) 0:01:49.193 ****** 2025-09-08 00:52:53.674582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-08 00:52:53.674594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-08 00:52:53.674605 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.674615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-08 00:52:53.674625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-08 00:52:53.674635 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.674650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-08 00:52:53.674666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-08 00:52:53.674676 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.674686 | orchestrator |
2025-09-08 00:52:53.674696 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-09-08 00:52:53.674705 | orchestrator | Monday 08 September 2025 00:47:59 +0000 (0:00:01.666) 0:01:50.859 ******
2025-09-08 00:52:53.674715 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.674724 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.674734 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.674744 | orchestrator |
2025-09-08 00:52:53.674753 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-09-08 00:52:53.674763 | orchestrator | Monday 08 September 2025 00:47:59 +0000 (0:00:00.422) 0:01:51.282 ******
2025-09-08 00:52:53.674772 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.674782 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.674792 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.674801 | orchestrator |
2025-09-08 00:52:53.674810 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-09-08 00:52:53.674826 | orchestrator | Monday 08 September 2025 00:48:01 +0000 (0:00:01.545) 0:01:52.827 ******
2025-09-08 00:52:53.674836 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.674845 | orchestrator |
2025-09-08 00:52:53.674855 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-09-08 00:52:53.674865 | orchestrator | Monday 08 September 2025 00:48:02 +0000 (0:00:00.915) 0:01:53.743 ******
2025-09-08 00:52:53.674875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.674886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.674897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.674917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.674933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.674944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.674955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.674965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.674986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.674997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675033 | orchestrator |
2025-09-08 00:52:53.675043 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-09-08 00:52:53.675053 | orchestrator | Monday 08 September 2025 00:48:06 +0000 (0:00:03.831) 0:01:57.575 ******
2025-09-08 00:52:53.675063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.675087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.675104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.675115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675203 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.675214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675230 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.675241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675250 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.675260 | orchestrator |
2025-09-08 00:52:53.675270 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-08 00:52:53.675284 | orchestrator | Monday 08 September 2025 00:48:07 +0000 (0:00:00.989) 0:01:58.564 ******
2025-09-08 00:52:53.675294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:52:53.675305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:52:53.675315 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.675325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:52:53.675334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:52:53.675344 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.675359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:52:53.675369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-08 00:52:53.675379 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.675389 | orchestrator |
2025-09-08 00:52:53.675399 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-08 00:52:53.675408 | orchestrator | Monday 08 September 2025 00:48:08 +0000 (0:00:01.719) 0:02:00.284 ******
2025-09-08 00:52:53.675418 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.675428 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.675437 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.675447 | orchestrator |
2025-09-08 00:52:53.675457 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-08 00:52:53.675466 | orchestrator | Monday 08 September 2025 00:48:10 +0000 (0:00:01.475) 0:02:01.759 ******
2025-09-08 00:52:53.675481 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.675491 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.675501 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.675525 | orchestrator |
2025-09-08 00:52:53.675535 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-08 00:52:53.675545 | orchestrator | Monday 08 September 2025 00:48:12 +0000 (0:00:02.173) 0:02:03.933 ******
2025-09-08 00:52:53.675555 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.675564 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.675574 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.675583 | orchestrator |
2025-09-08 00:52:53.675593 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-09-08 00:52:53.675603 | orchestrator | Monday 08 September 2025 00:48:12 +0000 (0:00:00.336) 0:02:04.270 ******
2025-09-08 00:52:53.675613 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.675622 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.675632 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.675641 | orchestrator |
2025-09-08 00:52:53.675651 | orchestrator | TASK [include_role : designate] ************************************************
2025-09-08 00:52:53.675661 | orchestrator | Monday 08 September 2025 00:48:13 +0000 (0:00:00.510) 0:02:04.780 ******
2025-09-08 00:52:53.675670 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.675680 | orchestrator |
2025-09-08 00:52:53.675689 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-09-08 00:52:53.675699 | orchestrator | Monday 08 September 2025 00:48:14 +0000 (0:00:00.848) 0:02:05.629 ******
2025-09-08 00:52:53.675709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 00:52:53.675724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 00:52:53.675735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 00:52:53.675815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 00:52:53.675838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 00:52:53.675915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 00:52:53.675925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.675981 | orchestrator |
2025-09-08 00:52:53.675991 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-09-08 00:52:53.676001 | orchestrator | Monday 08 September 2025 00:48:18 +0000 (0:00:04.035)
0:02:09.665 ****** 2025-09-08 00:52:53.676022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 00:52:53.676033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 00:52:53.676043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676107 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.676123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 00:52:53.676134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 00:52:53.676144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676211 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.676222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 00:52:53.676232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 00:52:53.676242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.676310 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.676319 | orchestrator | 2025-09-08 00:52:53.676329 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-08 00:52:53.676339 | orchestrator | Monday 08 September 2025 00:48:19 +0000 (0:00:01.294) 0:02:10.959 ****** 2025-09-08 00:52:53.676349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-08 00:52:53.676359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-08 00:52:53.676368 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.676378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-08 00:52:53.676388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-08 00:52:53.676397 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.676407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-08 00:52:53.676416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-08 00:52:53.676433 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.676442 | orchestrator | 2025-09-08 00:52:53.676452 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-08 00:52:53.676462 | orchestrator | Monday 08 September 2025 00:48:20 +0000 (0:00:01.003) 0:02:11.963 ****** 2025-09-08 00:52:53.676471 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.676481 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.676491 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.676500 | orchestrator | 2025-09-08 00:52:53.676524 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-08 00:52:53.676534 | orchestrator | Monday 08 September 2025 00:48:21 +0000 (0:00:01.285) 0:02:13.249 ****** 2025-09-08 00:52:53.676548 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.676558 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.676567 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.676577 | orchestrator | 2025-09-08 00:52:53.676587 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-08 00:52:53.676596 | orchestrator | Monday 08 September 2025 00:48:24 +0000 (0:00:02.133) 0:02:15.383 ****** 2025-09-08 00:52:53.676606 | orchestrator 
| skipping: [testbed-node-0] 2025-09-08 00:52:53.676615 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.676625 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.676634 | orchestrator | 2025-09-08 00:52:53.676644 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-08 00:52:53.676653 | orchestrator | Monday 08 September 2025 00:48:24 +0000 (0:00:00.550) 0:02:15.933 ****** 2025-09-08 00:52:53.676663 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.676672 | orchestrator | 2025-09-08 00:52:53.676682 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-08 00:52:53.676692 | orchestrator | Monday 08 September 2025 00:48:25 +0000 (0:00:00.896) 0:02:16.829 ****** 2025-09-08 00:52:53.676711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 00:52:53.676729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.676949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 00:52:53.676970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.677003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}}) 2025-09-08 00:52:53.677016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.677033 | orchestrator | 
2025-09-08 00:52:53.677043 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-08 00:52:53.677053 | orchestrator | Monday 08 September 2025 00:48:30 +0000 (0:00:04.687) 0:02:21.517 ****** 2025-09-08 00:52:53.677073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}})  2025-09-08 00:52:53.677085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.677102 | orchestrator | 
skipping: [testbed-node-0] 2025-09-08 00:52:53.677122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 00:52:53.677140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.677157 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.677168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 00:52:53.677229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.677248 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.677263 | orchestrator | 2025-09-08 00:52:53.677273 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-08 00:52:53.677283 | orchestrator | Monday 08 September 2025 00:48:33 +0000 (0:00:03.152) 0:02:24.670 ****** 2025-09-08 00:52:53.677294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-08 00:52:53.677305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-08 00:52:53.677315 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.677325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-08 00:52:53.677339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-08 00:52:53.677350 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.677360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-08 00:52:53.677376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-08 00:52:53.677387 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.677396 | orchestrator |
2025-09-08 00:52:53.677406 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-09-08 00:52:53.677416 | orchestrator | Monday 08 September 2025 00:48:36 +0000 (0:00:03.041) 0:02:27.711 ******
2025-09-08 00:52:53.677425 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.677441 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.677450 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.677460 | orchestrator |
2025-09-08 00:52:53.677469 | orchestrator | 
TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-09-08 00:52:53.677479 | orchestrator | Monday 08 September 2025 00:48:37 +0000 (0:00:01.326) 0:02:29.037 ******
2025-09-08 00:52:53.677489 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.677498 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.677561 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.677574 | orchestrator |
2025-09-08 00:52:53.677585 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-09-08 00:52:53.677597 | orchestrator | Monday 08 September 2025 00:48:39 +0000 (0:00:02.114) 0:02:31.152 ******
2025-09-08 00:52:53.677608 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.677619 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.677630 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.677642 | orchestrator |
2025-09-08 00:52:53.677654 | orchestrator | TASK [include_role : grafana] **************************************************
2025-09-08 00:52:53.677665 | orchestrator | Monday 08 September 2025 00:48:40 +0000 (0:00:00.546) 0:02:31.698 ******
2025-09-08 00:52:53.677676 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.677687 | orchestrator |
2025-09-08 00:52:53.677699 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-09-08 00:52:53.677711 | orchestrator | Monday 08 September 2025 00:48:41 +0000 (0:00:00.867) 0:02:32.566 ******
2025-09-08 00:52:53.677724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-08 00:52:53.677742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-08 00:52:53.677754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-08 00:52:53.677766 | orchestrator |
2025-09-08 00:52:53.677777 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-09-08 00:52:53.677789 | orchestrator | 
Monday 08 September 2025 00:48:44 +0000 (0:00:03.469) 0:02:36.036 ****** 2025-09-08 00:52:53.677807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 00:52:53.677826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 00:52:53.677838 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.677850 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.677862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-08 00:52:53.677873 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.677883 | orchestrator |
2025-09-08 00:52:53.677892 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-09-08 00:52:53.677901 | orchestrator | Monday 08 September 2025 00:48:45 +0000 (0:00:00.666) 0:02:36.702 ******
2025-09-08 00:52:53.677910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-08 00:52:53.677918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-08 00:52:53.677926 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.677934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-08 00:52:53.677946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-08 00:52:53.677954 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.677962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-08 00:52:53.677970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-08 00:52:53.677978 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.677989 | orchestrator |
2025-09-08 00:52:53.677997 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-09-08 00:52:53.678005 | orchestrator | Monday 08 September 2025 00:48:46 +0000 (0:00:00.663) 0:02:37.366 ******
2025-09-08 00:52:53.678036 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.678047 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.678055 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.678063 | orchestrator |
2025-09-08 00:52:53.678071 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-09-08 00:52:53.678079 | orchestrator | Monday 08 September 2025 00:48:47 +0000 (0:00:01.469) 0:02:38.836 ******
2025-09-08 00:52:53.678087 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.678095 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.678103 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.678110 | orchestrator |
2025-09-08 00:52:53.678122 | orchestrator | TASK [include_role : heat] *****************************************************
2025-09-08 00:52:53.678131 | orchestrator | Monday 08 September 2025 00:48:49 +0000 (0:00:02.265) 0:02:41.101 ******
2025-09-08 00:52:53.678139 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.678146 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.678154 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.678162 | orchestrator |
2025-09-08 00:52:53.678170 | orchestrator | TASK [include_role : 
horizon] ************************************************** 2025-09-08 00:52:53.678178 | orchestrator | Monday 08 September 2025 00:48:50 +0000 (0:00:00.594) 0:02:41.695 ****** 2025-09-08 00:52:53.678185 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.678193 | orchestrator | 2025-09-08 00:52:53.678201 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-08 00:52:53.678209 | orchestrator | Monday 08 September 2025 00:48:51 +0000 (0:00:00.935) 0:02:42.631 ****** 2025-09-08 00:52:53.678218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:52:53.678248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:52:53.678264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:52:53.678278 | orchestrator | 2025-09-08 00:52:53.678286 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-08 00:52:53.678293 | orchestrator | Monday 08 September 2025 00:48:56 +0000 (0:00:05.216) 0:02:47.847 ****** 2025-09-08 00:52:53.678307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:52:53.678317 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.678330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2025-09-08 00:52:53.678343 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.678357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-08 00:52:53.678366 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.678374 | orchestrator | 2025-09-08 00:52:53.678382 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-08 00:52:53.678390 | orchestrator | Monday 08 September 2025 00:48:57 +0000 (0:00:01.393) 0:02:49.241 ****** 2025-09-08 00:52:53.678398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:52:53.678408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:52:53.678424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:52:53.678436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:52:53.678445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-08 00:52:53.678453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:52:53.678461 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.678469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:52:53.678482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:52:53.678491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:52:53.678499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:52:53.678548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-08 00:52:53.678557 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.678566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:52:53.678574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-08 00:52:53.678587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-08 00:52:53.678595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-08 00:52:53.678603 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.678611 | orchestrator | 2025-09-08 00:52:53.678619 | orchestrator | TASK 
[proxysql-config : Copying over horizon ProxySQL users config] ************
2025-09-08 00:52:53.678627 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:01.094) 0:02:50.336 ******
2025-09-08 00:52:53.678635 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.678642 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.678650 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.678658 | orchestrator |
2025-09-08 00:52:53.678666 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-09-08 00:52:53.678674 | orchestrator | Monday 08 September 2025 00:49:00 +0000 (0:00:01.494) 0:02:51.830 ******
2025-09-08 00:52:53.678682 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.678689 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.678701 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.678709 | orchestrator |
2025-09-08 00:52:53.678717 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-09-08 00:52:53.678725 | orchestrator | Monday 08 September 2025 00:49:02 +0000 (0:00:02.138) 0:02:53.969 ******
2025-09-08 00:52:53.678733 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.678741 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.678748 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.678756 | orchestrator |
2025-09-08 00:52:53.678764 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-09-08 00:52:53.678772 | orchestrator | Monday 08 September 2025 00:49:03 +0000 (0:00:00.550) 0:02:54.520 ******
2025-09-08 00:52:53.678780 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.678788 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.678795 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.678803 | orchestrator |
2025-09-08 00:52:53.678811 | orchestrator | TASK
[include_role : keystone] ************************************************* 2025-09-08 00:52:53.678819 | orchestrator | Monday 08 September 2025 00:49:03 +0000 (0:00:00.315) 0:02:54.835 ****** 2025-09-08 00:52:53.678826 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.678834 | orchestrator | 2025-09-08 00:52:53.678842 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-08 00:52:53.678850 | orchestrator | Monday 08 September 2025 00:49:04 +0000 (0:00:00.977) 0:02:55.812 ****** 2025-09-08 00:52:53.678864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:52:53.678874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:52:53.678887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:52:53.678901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:52:53.678910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:52:53.678923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:52:53.678932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:52:53.678948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:52:53.678957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:52:53.678965 | orchestrator | 2025-09-08 00:52:53.678973 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-08 00:52:53.678981 | 
orchestrator | Monday 08 September 2025 00:49:08 +0000 (0:00:03.591) 0:02:59.404 ****** 2025-09-08 00:52:53.678993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:52:53.679006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:52:53.679015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:52:53.679028 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.679036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:52:53.679045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:52:53.679057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:52:53.679065 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.679078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:52:53.679086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:52:53.679097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:52:53.679104 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.679111 | orchestrator | 2025-09-08 00:52:53.679117 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-08 00:52:53.679124 | orchestrator | Monday 08 September 2025 00:49:08 +0000 (0:00:00.638) 0:03:00.042 ****** 2025-09-08 00:52:53.679131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:52:53.679138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:52:53.679145 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.679152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:52:53.679159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:52:53.679165 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.679176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:52:53.679183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-08 00:52:53.679190 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.679196 | orchestrator |
2025-09-08 00:52:53.679203 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-08 00:52:53.679210 | orchestrator | Monday 08 September 2025 00:49:09 +0000 (0:00:00.848) 0:03:00.891 ******
2025-09-08 00:52:53.679216 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.679223 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.679229 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.679236 | orchestrator |
2025-09-08 00:52:53.679243 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-08 00:52:53.679253 | orchestrator | Monday 08 September 2025 00:49:11 +0000 (0:00:01.746) 0:03:02.638 ******
2025-09-08 00:52:53.679260 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.679267 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.679273 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.679280 | orchestrator |
2025-09-08 00:52:53.679286 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-08 00:52:53.679296 | orchestrator | Monday 08 September 2025 00:49:13 +0000 (0:00:02.033) 0:03:04.672 ******
2025-09-08 00:52:53.679303 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.679310 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.679316 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.679323 | orchestrator |
2025-09-08 00:52:53.679330 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-08 00:52:53.679336 | orchestrator | Monday 08 September 2025 00:49:13 +0000 (0:00:00.343) 0:03:05.015 ******
2025-09-08 00:52:53.679343 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.679350 | orchestrator |
2025-09-08 00:52:53.679356 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-08 00:52:53.679363 | orchestrator | Monday 08 September 2025 00:49:14 +0000 (0:00:01.014) 0:03:06.030 ******
2025-09-08 00:52:53.679370 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 00:52:53.679378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 00:52:53.679400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 00:52:53.679419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679426 | orchestrator | 2025-09-08 00:52:53.679433 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-08 00:52:53.679440 | orchestrator | Monday 08 September 2025 00:49:18 +0000 (0:00:03.874) 0:03:09.904 ****** 2025-09-08 00:52:53.679447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 00:52:53.679459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679471 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.679481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 00:52:53.679489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679496 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.679503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 00:52:53.679523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679530 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.679536 | orchestrator | 2025-09-08 00:52:53.679547 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-08 00:52:53.679554 | orchestrator | Monday 08 September 2025 00:49:19 +0000 (0:00:00.663) 0:03:10.568 ****** 2025-09-08 00:52:53.679564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-08 00:52:53.679571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-08 00:52:53.679578 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.679585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-08 00:52:53.679592 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-08 00:52:53.679599 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.679605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-08 00:52:53.679612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-08 00:52:53.679623 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.679630 | orchestrator | 2025-09-08 00:52:53.679636 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-08 00:52:53.679643 | orchestrator | Monday 08 September 2025 00:49:20 +0000 (0:00:00.894) 0:03:11.463 ****** 2025-09-08 00:52:53.679650 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.679656 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.679663 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.679669 | orchestrator | 2025-09-08 00:52:53.679676 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-08 00:52:53.679683 | orchestrator | Monday 08 September 2025 00:49:22 +0000 (0:00:01.875) 0:03:13.339 ****** 2025-09-08 00:52:53.679689 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.679696 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.679702 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.679709 | orchestrator | 2025-09-08 00:52:53.679716 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-08 
00:52:53.679722 | orchestrator | Monday 08 September 2025 00:49:24 +0000 (0:00:02.121) 0:03:15.460 ****** 2025-09-08 00:52:53.679729 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.679736 | orchestrator | 2025-09-08 00:52:53.679742 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-08 00:52:53.679749 | orchestrator | Monday 08 September 2025 00:49:25 +0000 (0:00:01.554) 0:03:17.015 ****** 2025-09-08 00:52:53.679756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-08 00:52:53.679767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 
'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-08 00:52:53.679803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-08 00:52:53.679839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679864 | orchestrator | 2025-09-08 00:52:53.679871 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-08 00:52:53.679878 | orchestrator | Monday 08 September 2025 00:49:29 +0000 (0:00:04.216) 0:03:21.231 ****** 2025-09-08 00:52:53.679885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-08 00:52:53.679896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.679920 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.679931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-08 00:52:53.679938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  
2025-09-08 00:52:53.679950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.679957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.679963 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.679974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-08 00:52:53.679984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.679992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.679998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.680010 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.680016 | orchestrator |
2025-09-08 00:52:53.680023 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-09-08 00:52:53.680030 | orchestrator | Monday 08 September 2025 00:49:30 +0000 (0:00:00.811) 0:03:22.042 ******
2025-09-08 00:52:53.680037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-09-08 00:52:53.680044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-08 00:52:53.680050 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.680057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-09-08 00:52:53.680064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-08 00:52:53.680070 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.680077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-09-08 00:52:53.680084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-08 00:52:53.680090 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.680097 | orchestrator |
2025-09-08 00:52:53.680107 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-09-08 00:52:53.680114 | orchestrator | Monday 08 September 2025 00:49:31 +0000 (0:00:00.874) 0:03:22.917 ******
2025-09-08 00:52:53.680120 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.680127 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.680133 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.680140 | orchestrator |
2025-09-08 00:52:53.680146 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-09-08 00:52:53.680153 | orchestrator | Monday 08 September 2025 00:49:32 +0000 (0:00:01.327) 0:03:24.244 ******
2025-09-08 00:52:53.680160 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.680166 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.680173 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.680179 | orchestrator |
2025-09-08 00:52:53.680186 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-09-08 00:52:53.680193 | orchestrator | Monday 08 September 2025 00:49:35 +0000 (0:00:02.251) 0:03:26.496 ******
2025-09-08 00:52:53.680199 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.680206 | orchestrator |
2025-09-08 00:52:53.680212 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-09-08 00:52:53.680219 | orchestrator | Monday 08 September 2025 00:49:36 +0000 (0:00:01.424) 0:03:27.921 ******
2025-09-08 00:52:53.680225 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:52:53.680232 | orchestrator |
2025-09-08 00:52:53.680239 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-09-08 00:52:53.680245 | orchestrator | Monday 08 September 2025 00:49:39 +0000 (0:00:02.958) 0:03:30.880 ******
2025-09-08 00:52:53.680383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:52:53.680398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:52:53.680406 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.680453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:52:53.680469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:52:53.680476 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.680484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:52:53.680495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:52:53.680502 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.680524 | orchestrator | 2025-09-08 00:52:53.680531 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-08 00:52:53.680538 | orchestrator | Monday 08 September 2025 00:49:42 +0000 (0:00:02.469) 0:03:33.349 ****** 2025-09-08 00:52:53.680595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:52:53.680611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:52:53.680618 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.680629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:52:53.680684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:52:53.680694 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.680702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:52:53.680709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-08 00:52:53.680716 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.680723 | orchestrator | 2025-09-08 00:52:53.680730 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-08 00:52:53.680737 | orchestrator | Monday 08 September 2025 00:49:44 +0000 (0:00:02.450) 0:03:35.800 ****** 2025-09-08 00:52:53.680744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-08 00:52:53.680798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-08 00:52:53.680809 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.680816 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-08 00:52:53.680823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-08 00:52:53.680830 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.680837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-08 00:52:53.680880 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-08 00:52:53.680892 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.680899 | orchestrator | 2025-09-08 00:52:53.680906 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-08 00:52:53.680912 | orchestrator | Monday 08 September 2025 00:49:46 +0000 (0:00:02.491) 0:03:38.291 ****** 2025-09-08 00:52:53.680919 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.680926 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.680932 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.680947 | orchestrator | 2025-09-08 00:52:53.680957 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-08 00:52:53.680964 | orchestrator | Monday 08 September 2025 00:49:49 +0000 (0:00:02.157) 0:03:40.449 ****** 2025-09-08 00:52:53.680971 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.680977 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.680984 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.680990 | orchestrator | 2025-09-08 00:52:53.680997 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-08 00:52:53.681004 | orchestrator | Monday 08 September 2025 00:49:50 +0000 (0:00:01.634) 0:03:42.083 ****** 2025-09-08 00:52:53.681010 | orchestrator | skipping: [testbed-node-0] 2025-09-08 
00:52:53.681017 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.681023 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.681030 | orchestrator | 2025-09-08 00:52:53.681036 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-08 00:52:53.681043 | orchestrator | Monday 08 September 2025 00:49:51 +0000 (0:00:00.600) 0:03:42.684 ****** 2025-09-08 00:52:53.681050 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.681056 | orchestrator | 2025-09-08 00:52:53.681063 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-08 00:52:53.681069 | orchestrator | Monday 08 September 2025 00:49:52 +0000 (0:00:01.157) 0:03:43.841 ****** 2025-09-08 00:52:53.681125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-08 00:52:53.681136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-08 00:52:53.681143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-08 00:52:53.681151 | orchestrator | 2025-09-08 00:52:53.681157 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-08 00:52:53.681164 | orchestrator | Monday 08 September 2025 00:49:54 +0000 (0:00:01.481) 0:03:45.323 ****** 2025-09-08 00:52:53.681180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-08 00:52:53.681187 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.681194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-08 00:52:53.681201 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.681251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-08 00:52:53.681261 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.681268 | orchestrator | 2025-09-08 00:52:53.681274 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-08 00:52:53.681281 | orchestrator | Monday 08 September 2025 00:49:54 +0000 (0:00:00.718) 0:03:46.041 ****** 2025-09-08 00:52:53.681288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-08 00:52:53.681296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-08 00:52:53.681303 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.681310 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.681317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-08 00:52:53.681324 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.681330 | orchestrator | 2025-09-08 00:52:53.681337 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-08 00:52:53.681349 | orchestrator | Monday 08 September 2025 00:49:55 +0000 (0:00:00.667) 0:03:46.709 ****** 2025-09-08 00:52:53.681355 | orchestrator | skipping: [testbed-node-0] 2025-09-08 
00:52:53.681362 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.681369 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.681375 | orchestrator | 2025-09-08 00:52:53.681382 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-08 00:52:53.681389 | orchestrator | Monday 08 September 2025 00:49:55 +0000 (0:00:00.456) 0:03:47.165 ****** 2025-09-08 00:52:53.681396 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.681402 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.681409 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.681415 | orchestrator | 2025-09-08 00:52:53.681422 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-08 00:52:53.681429 | orchestrator | Monday 08 September 2025 00:49:57 +0000 (0:00:01.416) 0:03:48.582 ****** 2025-09-08 00:52:53.681436 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.681442 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.681449 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.681455 | orchestrator | 2025-09-08 00:52:53.681462 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-08 00:52:53.681469 | orchestrator | Monday 08 September 2025 00:49:57 +0000 (0:00:00.597) 0:03:49.179 ****** 2025-09-08 00:52:53.681475 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.681482 | orchestrator | 2025-09-08 00:52:53.681489 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-08 00:52:53.681496 | orchestrator | Monday 08 September 2025 00:49:59 +0000 (0:00:01.248) 0:03:50.428 ****** 2025-09-08 00:52:53.681519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 00:52:53.681572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 00:52:53.681595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:52:53.681684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:52:53.681711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.681760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.681782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.681789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.681808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.681815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.681888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-09-08 00:52:53.681895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.681915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:52:53.681965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.681975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.681988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.681995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.682014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.682090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 00:52:53.682105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.682123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:52:53.682211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.682301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.682391 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.682398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682405 | orchestrator | 2025-09-08 00:52:53.682412 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-08 00:52:53.682419 | orchestrator | Monday 08 September 2025 00:50:03 +0000 (0:00:04.797) 0:03:55.225 ****** 2025-09-08 00:52:53.682426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 00:52:53.682437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:52:53.682538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.682631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 00:52:53.682638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:52:53.682762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.682812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.682830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.682844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 00:52:53.682866 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.682917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.682941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.682964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:52:53.683013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.683030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-08 00:52:53.683038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.683113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.683121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.683128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.683135 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683158 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.683165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.683190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-08 00:52:53.683205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-08 00:52:53.683212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-08 00:52:53.683235 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-08 00:52:53.683259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683267 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.683274 | orchestrator | 2025-09-08 00:52:53.683281 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-08 00:52:53.683288 | orchestrator | Monday 08 September 2025 00:50:05 +0000 (0:00:01.610) 0:03:56.836 ****** 2025-09-08 00:52:53.683295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:52:53.683303 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:52:53.683310 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.683317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:52:53.683323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:52:53.683330 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.683342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:52:53.683349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-08 00:52:53.683356 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.683363 | orchestrator | 2025-09-08 00:52:53.683370 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-08 00:52:53.683376 | orchestrator | Monday 08 September 2025 00:50:07 +0000 (0:00:01.574) 0:03:58.411 ****** 2025-09-08 00:52:53.683383 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.683390 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.683396 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.683403 | orchestrator | 2025-09-08 00:52:53.683410 | orchestrator | TASK [proxysql-config : Copying over neutron 
ProxySQL rules config] ************ 2025-09-08 00:52:53.683416 | orchestrator | Monday 08 September 2025 00:50:08 +0000 (0:00:01.823) 0:04:00.235 ****** 2025-09-08 00:52:53.683423 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.683430 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.683436 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.683443 | orchestrator | 2025-09-08 00:52:53.683449 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-08 00:52:53.683456 | orchestrator | Monday 08 September 2025 00:50:11 +0000 (0:00:02.084) 0:04:02.319 ****** 2025-09-08 00:52:53.683466 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.683473 | orchestrator | 2025-09-08 00:52:53.683480 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-08 00:52:53.683486 | orchestrator | Monday 08 September 2025 00:50:12 +0000 (0:00:01.233) 0:04:03.553 ****** 2025-09-08 00:52:53.683494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2025-09-08 00:52:53.683565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 00:52:53.683575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 00:52:53.683591 | orchestrator | 2025-09-08 00:52:53.683597 | orchestrator | TASK 
[haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-08 00:52:53.683604 | orchestrator | Monday 08 September 2025 00:50:15 +0000 (0:00:03.361) 0:04:06.915 ****** 2025-09-08 00:52:53.683611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.683618 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.683629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.683636 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.683661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.683669 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.683680 | orchestrator | 2025-09-08 00:52:53.683687 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-08 00:52:53.683694 | orchestrator | Monday 08 September 2025 00:50:16 +0000 (0:00:00.956) 0:04:07.871 ****** 2025-09-08 00:52:53.683701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:52:53.683708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:52:53.683715 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.683721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:52:53.683728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:52:53.683735 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.683742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:52:53.683749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-08 00:52:53.683755 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.683762 | orchestrator | 2025-09-08 00:52:53.683769 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-08 00:52:53.683775 | orchestrator | Monday 08 September 2025 00:50:17 +0000 (0:00:00.806) 0:04:08.678 ****** 2025-09-08 00:52:53.683782 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.683789 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.683795 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.683802 | orchestrator | 2025-09-08 00:52:53.683809 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-08 
00:52:53.683815 | orchestrator | Monday 08 September 2025 00:50:18 +0000 (0:00:01.367) 0:04:10.046 ****** 2025-09-08 00:52:53.683822 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.683829 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.683835 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.683842 | orchestrator | 2025-09-08 00:52:53.683852 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-08 00:52:53.683860 | orchestrator | Monday 08 September 2025 00:50:20 +0000 (0:00:02.052) 0:04:12.098 ****** 2025-09-08 00:52:53.683866 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.683873 | orchestrator | 2025-09-08 00:52:53.683880 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-08 00:52:53.683886 | orchestrator | Monday 08 September 2025 00:50:22 +0000 (0:00:01.641) 0:04:13.740 ****** 2025-09-08 00:52:53.683912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 00:52:53.683925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 00:52:53.683949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 00:52:53.683992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.683999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.684005 | orchestrator | 2025-09-08 00:52:53.684012 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-08 00:52:53.684018 | orchestrator | Monday 08 September 2025 00:50:26 +0000 (0:00:04.449) 0:04:18.189 ****** 2025-09-08 00:52:53.684028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.684058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.684065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.684072 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.684079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.684088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.684095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.684106 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.684129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.684137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes':
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.684143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.684150 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.684156 | orchestrator |
2025-09-08 00:52:53.684163 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-09-08 00:52:53.684169 | orchestrator | Monday 08 September 2025 00:50:27 +0000 (0:00:00.699) 0:04:18.888 ******
2025-09-08 00:52:53.684175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True,
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684209 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.684216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684265 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.684271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port':
'8774', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-08 00:52:53.684290 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.684297 | orchestrator |
2025-09-08 00:52:53.684303 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-09-08 00:52:53.684309 | orchestrator | Monday 08 September 2025 00:50:28 +0000 (0:00:01.329) 0:04:20.218 ******
2025-09-08 00:52:53.684316 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.684322 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.684328 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.684334 | orchestrator |
2025-09-08 00:52:53.684340 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-09-08 00:52:53.684347 | orchestrator | Monday 08 September 2025 00:50:30 +0000 (0:00:01.355) 0:04:21.573 ******
2025-09-08 00:52:53.684353 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.684359 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.684365 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.684371 | orchestrator |
2025-09-08 00:52:53.684377 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-09-08 00:52:53.684384 | orchestrator | Monday 08 September 2025 00:50:32 +0000 (0:00:02.032) 0:04:23.606 ******
2025-09-08 00:52:53.684390 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1,
testbed-node-2
2025-09-08 00:52:53.684396 | orchestrator |
2025-09-08 00:52:53.684402 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-09-08 00:52:53.684408 | orchestrator | Monday 08 September 2025 00:50:33 +0000 (0:00:01.600) 0:04:25.206 ******
2025-09-08 00:52:53.684415 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-09-08 00:52:53.684421 | orchestrator |
2025-09-08 00:52:53.684427 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-09-08 00:52:53.684433 | orchestrator | Monday 08 September 2025 00:50:34 +0000 (0:00:00.831) 0:04:26.037 ******
2025-09-08 00:52:53.684444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group':
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684464 | orchestrator |
2025-09-08 00:52:53.684470 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-09-08 00:52:53.684477 | orchestrator | Monday 08 September 2025 00:50:39 +0000 (0:00:04.391) 0:04:30.429 ******
2025-09-08 00:52:53.684544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684558 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.684565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684571 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.684578 | orchestrator
| skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684584 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.684591 | orchestrator |
2025-09-08 00:52:53.684597 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-09-08 00:52:53.684603 | orchestrator | Monday 08 September 2025 00:50:40 +0000 (0:00:01.435) 0:04:31.864 ******
2025-09-08 00:52:53.684610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-08 00:52:53.684622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-08 00:52:53.684629 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.684635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-08 00:52:53.684642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port':
'6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-08 00:52:53.684648 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.684654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-08 00:52:53.684664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-08 00:52:53.684670 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.684676 | orchestrator |
2025-09-08 00:52:53.684682 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-09-08 00:52:53.684689 | orchestrator | Monday 08 September 2025 00:50:42 +0000 (0:00:01.567) 0:04:33.431 ******
2025-09-08 00:52:53.684695 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.684701 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.684707 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.684713 | orchestrator |
2025-09-08 00:52:53.684719 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-09-08 00:52:53.684725 | orchestrator | Monday 08 September 2025 00:50:44 +0000 (0:00:02.540) 0:04:35.972 ******
2025-09-08 00:52:53.684731 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.684737 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.684744 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.684750 | orchestrator |
2025-09-08 00:52:53.684756 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-09-08 00:52:53.684762 | orchestrator | Monday 08 September 2025 00:50:47
+0000 (0:00:03.131) 0:04:39.104 ******
2025-09-08 00:52:53.684769 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-09-08 00:52:53.684775 | orchestrator |
2025-09-08 00:52:53.684781 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-09-08 00:52:53.684807 | orchestrator | Monday 08 September 2025 00:50:49 +0000 (0:00:01.484) 0:04:40.589 ******
2025-09-08 00:52:53.684815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684822 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.684828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684839 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.684846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy':
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684852 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.684859 | orchestrator |
2025-09-08 00:52:53.684865 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-09-08 00:52:53.684871 | orchestrator | Monday 08 September 2025 00:50:50 +0000 (0:00:01.231) 0:04:41.820 ******
2025-09-08 00:52:53.684878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684884 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.684894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684900 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.684907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-08 00:52:53.684913 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.684919 | orchestrator |
2025-09-08 00:52:53.684925 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-09-08 00:52:53.684932 | orchestrator | Monday 08 September 2025 00:50:51 +0000 (0:00:01.444) 0:04:43.265 ******
2025-09-08 00:52:53.684938 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.684944 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.684950 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.684956 | orchestrator |
2025-09-08 00:52:53.684962 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-09-08 00:52:53.684986 | orchestrator | Monday 08 September 2025 00:50:53 +0000 (0:00:01.792) 0:04:45.058 ******
2025-09-08 00:52:53.684993 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.685005 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.685011 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.685017 | orchestrator |
2025-09-08 00:52:53.685023 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-09-08 00:52:53.685029 | orchestrator | Monday 08 September 2025 00:50:56 +0000 (0:00:02.346) 0:04:47.405 ******
2025-09-08 00:52:53.685036 |
orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.685042 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.685048 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.685054 | orchestrator |
2025-09-08 00:52:53.685060 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-09-08 00:52:53.685067 | orchestrator | Monday 08 September 2025 00:50:59 +0000 (0:00:03.019) 0:04:50.424 ******
2025-09-08 00:52:53.685073 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-09-08 00:52:53.685079 | orchestrator |
2025-09-08 00:52:53.685085 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-09-08 00:52:53.685091 | orchestrator | Monday 08 September 2025 00:50:59 +0000 (0:00:00.854) 0:04:51.278 ******
2025-09-08 00:52:53.685098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-08 00:52:53.685104 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.685111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external':
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-08 00:52:53.685117 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.685123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-08 00:52:53.685130 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.685136 | orchestrator |
2025-09-08 00:52:53.685142 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-09-08 00:52:53.685149 | orchestrator | Monday 08 September 2025 00:51:01 +0000 (0:00:01.414) 0:04:52.693 ******
2025-09-08 00:52:53.685158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-08 00:52:53.685165 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.685175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy':
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-08 00:52:53.685182 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.685206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-08 00:52:53.685213 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.685219 | orchestrator |
2025-09-08 00:52:53.685226 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-09-08 00:52:53.685232 | orchestrator | Monday 08 September 2025 00:51:02 +0000 (0:00:01.376) 0:04:54.069 ******
2025-09-08 00:52:53.685238 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.685244 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.685251 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.685257 | orchestrator |
2025-09-08 00:52:53.685263 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-09-08 00:52:53.685269 | orchestrator | Monday 08 September 2025 00:51:04 +0000 (0:00:01.438) 0:04:55.508 ******
2025-09-08 00:52:53.685275 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.685281 | orchestrator | ok:
[testbed-node-1]
2025-09-08 00:52:53.685288 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.685294 | orchestrator |
2025-09-08 00:52:53.685300 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-09-08 00:52:53.685306 | orchestrator | Monday 08 September 2025 00:51:06 +0000 (0:00:02.363) 0:04:57.871 ******
2025-09-08 00:52:53.685312 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.685318 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.685324 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.685331 | orchestrator |
2025-09-08 00:52:53.685337 | orchestrator | TASK [include_role : octavia] **************************************************
2025-09-08 00:52:53.685343 | orchestrator | Monday 08 September 2025 00:51:09 +0000 (0:00:02.986) 0:05:00.857 ******
2025-09-08 00:52:53.685349 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.685355 | orchestrator |
2025-09-08 00:52:53.685361 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-09-08 00:52:53.685367 | orchestrator | Monday 08 September 2025 00:51:11 +0000 (0:00:01.613) 0:05:02.471 ******
2025-09-08 00:52:53.685374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port':
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.685389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 00:52:53.685396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 00:52:53.685419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 00:52:53.685427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 00:52:53.685434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.685440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:52:53.685454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.685492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 00:52:53.685498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:52:53.685517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.685545 | orchestrator | 2025-09-08 00:52:53.685552 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using 
single external frontend] *** 2025-09-08 00:52:53.685558 | orchestrator | Monday 08 September 2025 00:51:14 +0000 (0:00:03.570) 0:05:06.042 ****** 2025-09-08 00:52:53.685584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.685591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:52:53.685598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.685624 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.685634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.685658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:52:53.685665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685672 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.685689 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.685698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.685705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 00:52:53.685729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 00:52:53.685743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 00:52:53.685750 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.685756 | orchestrator | 2025-09-08 00:52:53.685767 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-08 00:52:53.685773 | orchestrator | Monday 08 September 2025 00:51:15 +0000 (0:00:01.038) 0:05:07.081 ****** 2025-09-08 00:52:53.685779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:52:53.685785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:52:53.685792 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.685798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2025-09-08 00:52:53.685804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:52:53.685811 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.685817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:52:53.685826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-08 00:52:53.685833 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.685839 | orchestrator | 2025-09-08 00:52:53.685845 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-08 00:52:53.685851 | orchestrator | Monday 08 September 2025 00:51:17 +0000 (0:00:01.361) 0:05:08.442 ****** 2025-09-08 00:52:53.685857 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.685864 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.685870 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.685876 | orchestrator | 2025-09-08 00:52:53.685882 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-08 00:52:53.685888 | orchestrator | Monday 08 September 2025 00:51:18 +0000 (0:00:01.373) 0:05:09.816 ****** 2025-09-08 00:52:53.685894 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:52:53.685901 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:52:53.685907 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:52:53.685913 | orchestrator | 2025-09-08 
00:52:53.685919 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-08 00:52:53.685925 | orchestrator | Monday 08 September 2025 00:51:20 +0000 (0:00:02.120) 0:05:11.937 ****** 2025-09-08 00:52:53.685931 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.685937 | orchestrator | 2025-09-08 00:52:53.685944 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-08 00:52:53.685950 | orchestrator | Monday 08 September 2025 00:51:22 +0000 (0:00:01.716) 0:05:13.653 ****** 2025-09-08 00:52:53.685973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:52:53.685987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:52:53.685993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:52:53.686004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:52:53.686059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:52:53.686074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:52:53.686081 | orchestrator | 2025-09-08 00:52:53.686087 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-08 00:52:53.686094 | orchestrator | Monday 08 September 2025 00:51:27 +0000 (0:00:05.399) 0:05:19.052 ****** 2025-09-08 00:52:53.686100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:52:53.686111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:52:53.686118 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.686143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:52:53.686155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:52:53.686162 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.686168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:52:53.686178 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:52:53.686185 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.686192 | orchestrator | 2025-09-08 00:52:53.686198 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-08 00:52:53.686204 | orchestrator | Monday 08 September 2025 00:51:28 +0000 (0:00:00.704) 0:05:19.757 ****** 2025-09-08 00:52:53.686211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-08 00:52:53.686233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-08 00:52:53.686245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-08 00:52:53.686252 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.686258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-08 00:52:53.686264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-08 00:52:53.686271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-08 00:52:53.686277 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.686283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-08 00:52:53.686290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-08 00:52:53.686296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-08 00:52:53.686303 | 
orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.686309 | orchestrator | 2025-09-08 00:52:53.686315 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-08 00:52:53.686321 | orchestrator | Monday 08 September 2025 00:51:30 +0000 (0:00:01.572) 0:05:21.330 ****** 2025-09-08 00:52:53.686328 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.686334 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.686340 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.686346 | orchestrator | 2025-09-08 00:52:53.686353 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-08 00:52:53.686359 | orchestrator | Monday 08 September 2025 00:51:30 +0000 (0:00:00.457) 0:05:21.788 ****** 2025-09-08 00:52:53.686365 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.686371 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.686377 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.686383 | orchestrator | 2025-09-08 00:52:53.686390 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-08 00:52:53.686396 | orchestrator | Monday 08 September 2025 00:51:31 +0000 (0:00:01.310) 0:05:23.098 ****** 2025-09-08 00:52:53.686402 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:52:53.686408 | orchestrator | 2025-09-08 00:52:53.686415 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-08 00:52:53.686421 | orchestrator | Monday 08 September 2025 00:51:33 +0000 (0:00:01.815) 0:05:24.914 ****** 2025-09-08 00:52:53.686433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 00:52:53.686444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 00:52:53.686468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 00:52:53.686490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 00:52:53.686501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 00:52:53.686522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2025-09-08 00:52:53.686568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 00:52:53.686574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 00:52:53.686581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686587 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 00:52:53.686611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 00:52:53.686618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-08 00:52:53.686624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 00:52:53.686653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 00:52:53.686665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-08 00:52:53.686671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 00:52:53.686695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 00:52:53.686706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-08 00:52:53.686715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 00:52:53.686735 | orchestrator 
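The "changed"/"skipping" pattern above comes from the haproxy-config role iterating over kolla-style service dicts: a service only gets HAProxy configuration when its `enabled` flag and the per-frontend `enabled` flags under its `haproxy` key are true. A minimal sketch of that selection logic (the helper name `select_haproxy_frontends` is hypothetical, not kolla-ansible's actual code):

```python
# Hypothetical sketch of how enabled haproxy frontends are picked out of a
# kolla-style service dict like the ones printed in the log above.

def select_haproxy_frontends(service):
    """Return only the enabled haproxy frontend entries of a service dict,
    keyed by frontend name; disabled services/frontends are skipped."""
    if not service.get("enabled"):
        return {}
    return {
        name: cfg
        for name, cfg in service.get("haproxy", {}).items()
        if cfg.get("enabled")
    }

# Trimmed example mirroring the prometheus-server entry from the log:
prometheus_server = {
    "container_name": "prometheus_server",
    "enabled": True,
    "haproxy": {
        "prometheus_server": {
            "enabled": True, "mode": "http", "port": "9091",
        },
        "prometheus_server_external": {
            "enabled": False, "mode": "http", "port": "9091",
        },
    },
}

# Only the internal frontend is configured; the disabled external one is
# skipped, matching the "changed" vs. "skipping" results in the log.
print(list(select_haproxy_frontends(prometheus_server)))
```

This mirrors why `prometheus-server` shows "changed" while entries such as `prometheus-node-exporter` (no enabled `haproxy` frontends) show "skipping" on every node.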
| 2025-09-08 00:52:53.686741 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-08 00:52:53.686748 | orchestrator | Monday 08 September 2025 00:51:37 +0000 (0:00:04.350) 0:05:29.265 ****** 2025-09-08 00:52:53.686754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-08 00:52:53.686768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 00:52:53.686777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 00:52:53.686800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-08 00:52:53.686808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-08 00:52:53.686818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 00:52:53.686827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:52:53.686834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 00:52:53.686840 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.686850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-08 00:52:53.686857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 00:52:53.686864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:52:53.686870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-08 00:52:53.686881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:52:53.686890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-08 00:52:53.686897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 00:52:53.686907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:52:53.686914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-08 00:52:53.686921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:52:53.686933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-08 00:52:53.686944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-08 00:52:53.686951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:52:53.686961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-08 00:52:53.686968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:52:53.686975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 00:52:53.686987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-08 00:52:53.686994 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:52:53.687010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 00:52:53.687020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-08 00:52:53.687026 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.687033 | orchestrator |
2025-09-08 00:52:53.687039 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-09-08 00:52:53.687045 | orchestrator | Monday 08 September 2025 00:51:38 +0000 (0:00:00.940) 0:05:30.205 ******
2025-09-08 00:52:53.687052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-08 00:52:53.687058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-08 00:52:53.687065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:52:53.687077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-08 00:52:53.687083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:52:53.687091 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.687097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-08 00:52:53.687104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:52:53.687111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:52:53.687117 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-08 00:52:53.687133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-08 00:52:53.687139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:52:53.687146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-08 00:52:53.687152 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.687159 | orchestrator |
2025-09-08 00:52:53.687165 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-09-08 00:52:53.687171 | orchestrator | Monday 08 September 2025 00:51:40 +0000 (0:00:01.337) 0:05:31.543 ******
2025-09-08 00:52:53.687178 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.687184 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687190 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.687196 | orchestrator |
2025-09-08 00:52:53.687202 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-09-08 00:52:53.687209 | orchestrator | Monday 08 September 2025 00:51:40 +0000 (0:00:00.520) 0:05:32.063 ******
2025-09-08 00:52:53.687218 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.687224 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687230 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.687236 | orchestrator |
2025-09-08 00:52:53.687243 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-09-08 00:52:53.687249 | orchestrator | Monday 08 September 2025 00:51:42 +0000 (0:00:01.337) 0:05:33.401 ******
2025-09-08 00:52:53.687255 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.687265 | orchestrator |
2025-09-08 00:52:53.687272 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-09-08 00:52:53.687278 | orchestrator | Monday 08 September 2025 00:51:43 +0000 (0:00:01.433) 0:05:34.834 ******
2025-09-08 00:52:53.687284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:52:53.687292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:52:53.687302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:52:53.687309 | orchestrator |
2025-09-08 00:52:53.687315 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-09-08 00:52:53.687321 | orchestrator | Monday 08 September 2025 00:51:46 +0000 (0:00:03.000) 0:05:37.835 ******
2025-09-08 00:52:53.687331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:52:53.687342 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.687349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:52:53.687356 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-08 00:52:53.687369 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.687375 | orchestrator |
2025-09-08 00:52:53.687381 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-09-08 00:52:53.687387 | orchestrator | Monday 08 September 2025 00:51:46 +0000 (0:00:00.426) 0:05:38.262 ******
2025-09-08 00:52:53.687397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-08 00:52:53.687404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-08 00:52:53.687410 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.687416 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-08 00:52:53.687429 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.687435 | orchestrator |
2025-09-08 00:52:53.687441 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-09-08 00:52:53.687447 | orchestrator | Monday 08 September 2025 00:51:47 +0000 (0:00:00.663) 0:05:38.926 ******
2025-09-08 00:52:53.687457 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.687464 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687470 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.687476 | orchestrator |
2025-09-08 00:52:53.687482 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-09-08 00:52:53.687488 | orchestrator | Monday 08 September 2025 00:51:48 +0000 (0:00:00.878) 0:05:39.805 ******
2025-09-08 00:52:53.687494 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.687501 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687544 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.687551 | orchestrator |
2025-09-08 00:52:53.687557 | orchestrator | TASK [include_role : skyline] **************************************************
2025-09-08 00:52:53.687567 | orchestrator | Monday 08 September 2025 00:51:49 +0000 (0:00:01.347) 0:05:41.153 ******
2025-09-08 00:52:53.687573 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:52:53.687580 | orchestrator |
2025-09-08 00:52:53.687586 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-09-08 00:52:53.687592 | orchestrator | Monday 08 September 2025 00:51:51 +0000 (0:00:01.558) 0:05:42.711 ******
2025-09-08 00:52:53.687599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687651 | orchestrator |
2025-09-08 00:52:53.687658 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-09-08 00:52:53.687664 | orchestrator | Monday 08 September 2025 00:51:58 +0000 (0:00:06.628) 0:05:49.340 ******
2025-09-08 00:52:53.687673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687691 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.687701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687714 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-08 00:52:53.687737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-08 00:52:53.687744 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.687751 | orchestrator | 2025-09-08 00:52:53.687757 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-08 00:52:53.687763 | orchestrator | Monday 08 September 2025 00:51:58 +0000 (0:00:00.687) 0:05:50.027 ****** 2025-09-08 00:52:53.687769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687798 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:52:53.687805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687836 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:52:53.687843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-08 00:52:53.687866 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:52:53.687872 | orchestrator | 2025-09-08 00:52:53.687878 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-08 00:52:53.687885 | orchestrator | 
Monday 08 September 2025 00:51:59 +0000 (0:00:00.931) 0:05:50.959 ******
2025-09-08 00:52:53.687891 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.687897 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.687903 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.687910 | orchestrator |
2025-09-08 00:52:53.687919 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-09-08 00:52:53.687926 | orchestrator | Monday 08 September 2025 00:52:01 +0000 (0:00:02.215) 0:05:53.175 ******
2025-09-08 00:52:53.687932 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.687938 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.687945 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.687951 | orchestrator |
2025-09-08 00:52:53.687957 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-08 00:52:53.687963 | orchestrator | Monday 08 September 2025 00:52:04 +0000 (0:00:02.207) 0:05:55.383 ******
2025-09-08 00:52:53.687969 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.687976 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.687982 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.687988 | orchestrator |
2025-09-08 00:52:53.687994 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-08 00:52:53.688001 | orchestrator | Monday 08 September 2025 00:52:04 +0000 (0:00:00.338) 0:05:55.721 ******
2025-09-08 00:52:53.688007 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688013 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688019 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688025 | orchestrator |
2025-09-08 00:52:53.688032 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-08 00:52:53.688038 | orchestrator | Monday 08 September 2025 00:52:04 +0000 (0:00:00.337) 0:05:56.059 ******
2025-09-08 00:52:53.688044 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688050 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688057 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688063 | orchestrator |
2025-09-08 00:52:53.688069 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-08 00:52:53.688074 | orchestrator | Monday 08 September 2025 00:52:05 +0000 (0:00:00.310) 0:05:56.369 ******
2025-09-08 00:52:53.688082 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688087 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688093 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688098 | orchestrator |
2025-09-08 00:52:53.688104 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-08 00:52:53.688109 | orchestrator | Monday 08 September 2025 00:52:05 +0000 (0:00:00.683) 0:05:57.053 ******
2025-09-08 00:52:53.688115 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688120 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688125 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688131 | orchestrator |
2025-09-08 00:52:53.688136 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-08 00:52:53.688142 | orchestrator | Monday 08 September 2025 00:52:06 +0000 (0:00:00.328) 0:05:57.381 ******
2025-09-08 00:52:53.688147 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688153 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688158 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688164 | orchestrator |
2025-09-08 00:52:53.688173 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-08 00:52:53.688179 | orchestrator | Monday 08 September 2025 00:52:06 +0000 (0:00:00.549) 0:05:57.931 ******
2025-09-08 00:52:53.688184 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.688189 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.688195 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.688200 | orchestrator |
2025-09-08 00:52:53.688205 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-08 00:52:53.688211 | orchestrator | Monday 08 September 2025 00:52:07 +0000 (0:00:01.024) 0:05:58.955 ******
2025-09-08 00:52:53.688216 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.688222 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.688227 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.688233 | orchestrator |
2025-09-08 00:52:53.688238 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-08 00:52:53.688244 | orchestrator | Monday 08 September 2025 00:52:08 +0000 (0:00:00.367) 0:05:59.323 ******
2025-09-08 00:52:53.688249 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.688254 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.688260 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.688265 | orchestrator |
2025-09-08 00:52:53.688271 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-08 00:52:53.688276 | orchestrator | Monday 08 September 2025 00:52:09 +0000 (0:00:00.998) 0:06:00.321 ******
2025-09-08 00:52:53.688282 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.688287 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.688292 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.688298 | orchestrator |
2025-09-08 00:52:53.688303 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-08 00:52:53.688309 | orchestrator | Monday 08 September 2025 00:52:09 +0000 (0:00:00.921) 0:06:01.243 ******
2025-09-08 00:52:53.688314 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.688319 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.688325 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.688330 | orchestrator |
2025-09-08 00:52:53.688336 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-08 00:52:53.688341 | orchestrator | Monday 08 September 2025 00:52:11 +0000 (0:00:01.240) 0:06:02.484 ******
2025-09-08 00:52:53.688347 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.688352 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.688358 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.688363 | orchestrator |
2025-09-08 00:52:53.688369 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-08 00:52:53.688374 | orchestrator | Monday 08 September 2025 00:52:21 +0000 (0:00:10.188) 0:06:12.673 ******
2025-09-08 00:52:53.688379 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.688385 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.688390 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.688396 | orchestrator |
2025-09-08 00:52:53.688401 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-08 00:52:53.688407 | orchestrator | Monday 08 September 2025 00:52:22 +0000 (0:00:00.769) 0:06:13.442 ******
2025-09-08 00:52:53.688412 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.688418 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.688423 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.688428 | orchestrator |
2025-09-08 00:52:53.688434 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-08 00:52:53.688442 | orchestrator | Monday 08 September 2025 00:52:31 +0000 (0:00:09.013) 0:06:22.456 ******
2025-09-08 00:52:53.688448 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.688454 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.688459 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.688464 | orchestrator |
2025-09-08 00:52:53.688470 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-08 00:52:53.688475 | orchestrator | Monday 08 September 2025 00:52:35 +0000 (0:00:04.811) 0:06:27.268 ******
2025-09-08 00:52:53.688485 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:52:53.688490 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:52:53.688496 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:52:53.688501 | orchestrator |
2025-09-08 00:52:53.688518 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-08 00:52:53.688523 | orchestrator | Monday 08 September 2025 00:52:45 +0000 (0:00:09.640) 0:06:36.908 ******
2025-09-08 00:52:53.688529 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688534 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688540 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688545 | orchestrator |
2025-09-08 00:52:53.688550 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-08 00:52:53.688556 | orchestrator | Monday 08 September 2025 00:52:45 +0000 (0:00:00.365) 0:06:37.274 ******
2025-09-08 00:52:53.688561 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688567 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688572 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688578 | orchestrator |
2025-09-08 00:52:53.688583 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-08 00:52:53.688589 | orchestrator | Monday 08 September 2025 00:52:46 +0000 (0:00:00.372) 0:06:37.646 ******
2025-09-08 00:52:53.688594 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688600 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688608 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688613 | orchestrator |
2025-09-08 00:52:53.688619 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-08 00:52:53.688624 | orchestrator | Monday 08 September 2025 00:52:46 +0000 (0:00:00.345) 0:06:37.992 ******
2025-09-08 00:52:53.688630 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688635 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688641 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688646 | orchestrator |
2025-09-08 00:52:53.688651 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-08 00:52:53.688657 | orchestrator | Monday 08 September 2025 00:52:47 +0000 (0:00:00.748) 0:06:38.740 ******
2025-09-08 00:52:53.688662 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688668 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688673 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688678 | orchestrator |
2025-09-08 00:52:53.688684 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-08 00:52:53.688689 | orchestrator | Monday 08 September 2025 00:52:47 +0000 (0:00:00.367) 0:06:39.107 ******
2025-09-08 00:52:53.688695 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:52:53.688700 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:52:53.688706 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:52:53.688711 | orchestrator |
2025-09-08 00:52:53.688716 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-08 00:52:53.688722 | orchestrator | Monday 08 September 2025 00:52:48 +0000 (0:00:00.405) 0:06:39.513 ******
2025-09-08 00:52:53.688727 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.688733 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.688738 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.688743 | orchestrator |
2025-09-08 00:52:53.688749 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-08 00:52:53.688754 | orchestrator | Monday 08 September 2025 00:52:49 +0000 (0:00:01.350) 0:06:40.863 ******
2025-09-08 00:52:53.688760 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:52:53.688765 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:52:53.688771 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:52:53.688776 | orchestrator |
2025-09-08 00:52:53.688782 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:52:53.688787 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-08 00:52:53.688796 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-08 00:52:53.688802 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-08 00:52:53.688808 | orchestrator |
2025-09-08 00:52:53.688813 | orchestrator |
2025-09-08 00:52:53.688819 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:52:53.688824 | orchestrator | Monday 08 September 2025 00:52:50 +0000 (0:00:01.294) 0:06:42.158 ******
2025-09-08 00:52:53.688829 | orchestrator | ===============================================================================
2025-09-08 00:52:53.688835 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.19s
2025-09-08 00:52:53.688840 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.64s
2025-09-08 00:52:53.688846 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.01s
2025-09-08 00:52:53.688851 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.63s
2025-09-08 00:52:53.688856 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.42s
2025-09-08 00:52:53.688862 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.40s
2025-09-08 00:52:53.688867 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.22s
2025-09-08 00:52:53.688873 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 5.11s
2025-09-08 00:52:53.688878 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.87s
2025-09-08 00:52:53.688884 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.81s
2025-09-08 00:52:53.688889 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.80s
2025-09-08 00:52:53.688894 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.69s
2025-09-08 00:52:53.688900 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.56s
2025-09-08 00:52:53.688905 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.45s
2025-09-08 00:52:53.688911 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.39s
2025-09-08 00:52:53.688946 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.35s
2025-09-08 00:52:53.688956 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.22s
2025-09-08 00:52:53.688962 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.04s
2025-09-08 00:52:53.688967 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.98s
2025-09-08 00:52:53.688972 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.95s
2025-09-08 00:52:53.688978 | orchestrator | 2025-09-08 00:52:53 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED
2025-09-08 00:52:53.688983 | orchestrator | 2025-09-08 00:52:53 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:52:56.713906 | orchestrator | 2025-09-08 00:52:56 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:52:56.716198 | orchestrator | 2025-09-08 00:52:56 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED
2025-09-08 00:52:56.720296 | orchestrator | 2025-09-08 00:52:56 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED
2025-09-08 00:52:56.720345 | orchestrator | 2025-09-08 00:52:56 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:52:59.775569 | orchestrator | 2025-09-08 00:52:59 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:52:59.777293 | orchestrator | 2025-09-08 00:52:59 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED
2025-09-08 00:52:59.779767 | orchestrator | 2025-09-08 00:52:59 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED
2025-09-08 00:52:59.780280 | orchestrator | 2025-09-08 00:52:59 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:53:02.831950 | orchestrator | 2025-09-08 00:53:02 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:53:02.832321 | orchestrator | 2025-09-08 00:53:02 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED
2025-09-08 00:53:02.833009 | orchestrator | 2025-09-08 00:53:02 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED
2025-09-08 00:53:02.833282 | orchestrator | 2025-09-08 00:53:02 | INFO  | Wait 1 second(s) until the next check
2025-09-08
00:53:05.862811 | orchestrator | 2025-09-08 00:53:05 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:53:05.862960 | orchestrator | 2025-09-08 00:53:05 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED
2025-09-08 00:53:05.863286 | orchestrator | 2025-09-08 00:53:05 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED
2025-09-08 00:53:05.863308 | orchestrator | 2025-09-08 00:53:05 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 s from 00:53:08 through 00:54:09; all three tasks remain in state STARTED ...]
2025-09-08 00:54:12.954475 | orchestrator | 2025-09-08 00:54:12 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED
2025-09-08 00:54:12.955509 | orchestrator | 2025-09-08 00:54:12 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED
2025-09-08 00:54:12.957168 | orchestrator | 2025-09-08 00:54:12 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED
2025-09-08 00:54:12.957213 | orchestrator | 2025-09-08 00:54:12 | INFO  | Wait 1 second(s) until the next
check 2025-09-08 00:54:16.011396 | orchestrator | 2025-09-08 00:54:16 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:16.013128 | orchestrator | 2025-09-08 00:54:16 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:16.015231 | orchestrator | 2025-09-08 00:54:16 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:16.015651 | orchestrator | 2025-09-08 00:54:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:19.057584 | orchestrator | 2025-09-08 00:54:19 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:19.059862 | orchestrator | 2025-09-08 00:54:19 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:19.062333 | orchestrator | 2025-09-08 00:54:19 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:19.062693 | orchestrator | 2025-09-08 00:54:19 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:22.113716 | orchestrator | 2025-09-08 00:54:22 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:22.117663 | orchestrator | 2025-09-08 00:54:22 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:22.120057 | orchestrator | 2025-09-08 00:54:22 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:22.120347 | orchestrator | 2025-09-08 00:54:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:25.174652 | orchestrator | 2025-09-08 00:54:25 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:25.175360 | orchestrator | 2025-09-08 00:54:25 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:25.176322 | orchestrator | 2025-09-08 00:54:25 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 
00:54:25.176398 | orchestrator | 2025-09-08 00:54:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:28.219143 | orchestrator | 2025-09-08 00:54:28 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:28.220056 | orchestrator | 2025-09-08 00:54:28 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:28.220952 | orchestrator | 2025-09-08 00:54:28 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:28.220977 | orchestrator | 2025-09-08 00:54:28 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:31.254363 | orchestrator | 2025-09-08 00:54:31 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:31.255152 | orchestrator | 2025-09-08 00:54:31 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:31.256149 | orchestrator | 2025-09-08 00:54:31 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:31.256173 | orchestrator | 2025-09-08 00:54:31 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:34.309514 | orchestrator | 2025-09-08 00:54:34 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:34.311004 | orchestrator | 2025-09-08 00:54:34 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:34.313114 | orchestrator | 2025-09-08 00:54:34 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:34.313139 | orchestrator | 2025-09-08 00:54:34 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:37.363924 | orchestrator | 2025-09-08 00:54:37 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:37.365760 | orchestrator | 2025-09-08 00:54:37 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:37.368132 | orchestrator | 2025-09-08 00:54:37 | 
INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:37.368159 | orchestrator | 2025-09-08 00:54:37 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:40.413736 | orchestrator | 2025-09-08 00:54:40 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:40.416085 | orchestrator | 2025-09-08 00:54:40 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:40.418320 | orchestrator | 2025-09-08 00:54:40 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:40.418716 | orchestrator | 2025-09-08 00:54:40 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:43.464350 | orchestrator | 2025-09-08 00:54:43 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:43.466745 | orchestrator | 2025-09-08 00:54:43 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:43.468922 | orchestrator | 2025-09-08 00:54:43 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:43.469176 | orchestrator | 2025-09-08 00:54:43 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:46.510795 | orchestrator | 2025-09-08 00:54:46 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:46.511313 | orchestrator | 2025-09-08 00:54:46 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:46.512418 | orchestrator | 2025-09-08 00:54:46 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:46.512444 | orchestrator | 2025-09-08 00:54:46 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:49.557534 | orchestrator | 2025-09-08 00:54:49 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:49.559216 | orchestrator | 2025-09-08 00:54:49 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in 
state STARTED 2025-09-08 00:54:49.561164 | orchestrator | 2025-09-08 00:54:49 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:49.561202 | orchestrator | 2025-09-08 00:54:49 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:52.610670 | orchestrator | 2025-09-08 00:54:52 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:52.611943 | orchestrator | 2025-09-08 00:54:52 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:52.613865 | orchestrator | 2025-09-08 00:54:52 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:52.613889 | orchestrator | 2025-09-08 00:54:52 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:55.672303 | orchestrator | 2025-09-08 00:54:55 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state STARTED 2025-09-08 00:54:55.672598 | orchestrator | 2025-09-08 00:54:55 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:55.674820 | orchestrator | 2025-09-08 00:54:55 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:55.674845 | orchestrator | 2025-09-08 00:54:55 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:54:58.738735 | orchestrator | 2025-09-08 00:54:58.738924 | orchestrator | 2025-09-08 00:54:58.738945 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-08 00:54:58.738958 | orchestrator | 2025-09-08 00:54:58.738969 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-08 00:54:58.738981 | orchestrator | Monday 08 September 2025 00:43:31 +0000 (0:00:00.949) 0:00:00.949 ****** 2025-09-08 00:54:58.738993 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 
2025-09-08 00:54:58.739004 | orchestrator |
2025-09-08 00:54:58.739016 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-08 00:54:58.739027 | orchestrator | Monday 08 September 2025 00:43:33 +0000 (0:00:01.602) 0:00:02.552 ******
2025-09-08 00:54:58.739038 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.739050 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.739061 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.739071 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.739082 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.739093 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.739104 | orchestrator |
2025-09-08 00:54:58.739115 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-08 00:54:58.739126 | orchestrator | Monday 08 September 2025 00:43:34 +0000 (0:00:01.350) 0:00:03.902 ******
2025-09-08 00:54:58.739137 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.739148 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.739186 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.739198 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.739209 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.739219 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.739230 | orchestrator |
2025-09-08 00:54:58.739281 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-08 00:54:58.739295 | orchestrator | Monday 08 September 2025 00:43:35 +0000 (0:00:00.799) 0:00:04.701 ******
2025-09-08 00:54:58.739385 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.739397 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.739440 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.739454 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.739550 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.739564 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.739576 | orchestrator |
2025-09-08 00:54:58.739694 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-08 00:54:58.739708 | orchestrator | Monday 08 September 2025 00:43:36 +0000 (0:00:01.273) 0:00:05.974 ******
2025-09-08 00:54:58.739720 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.739730 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.739741 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.739752 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.739762 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.739773 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.739784 | orchestrator |
2025-09-08 00:54:58.739795 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-08 00:54:58.739805 | orchestrator | Monday 08 September 2025 00:43:37 +0000 (0:00:00.903) 0:00:06.878 ******
2025-09-08 00:54:58.739846 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.739858 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.739869 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.739910 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.739922 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.739932 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.739943 | orchestrator |
2025-09-08 00:54:58.739954 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-08 00:54:58.739965 | orchestrator | Monday 08 September 2025 00:43:38 +0000 (0:00:00.617) 0:00:07.496 ******
2025-09-08 00:54:58.739976 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.739986 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.739997 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.740008 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.740018 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.740029 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.740039 | orchestrator |
2025-09-08 00:54:58.740050 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-08 00:54:58.740093 | orchestrator | Monday 08 September 2025 00:43:39 +0000 (0:00:00.821) 0:00:08.487 ******
2025-09-08 00:54:58.740104 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.740115 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.740168 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.740180 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.740191 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.740202 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.740213 | orchestrator |
2025-09-08 00:54:58.740224 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-08 00:54:58.740235 | orchestrator | Monday 08 September 2025 00:43:40 +0000 (0:00:00.822) 0:00:09.309 ******
2025-09-08 00:54:58.740246 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.740257 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.740267 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.740278 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.740289 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.740299 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.740310 | orchestrator |
2025-09-08 00:54:58.740321 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-08 00:54:58.740332 | orchestrator | Monday 08 September 2025 00:43:41 +0000 (0:00:00.822) 0:00:10.131 ******
2025-09-08 00:54:58.740343 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:54:58.740354 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:54:58.740365 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:54:58.740376 | orchestrator |
2025-09-08 00:54:58.740386 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-08 00:54:58.740397 | orchestrator | Monday 08 September 2025 00:43:41 +0000 (0:00:00.692) 0:00:10.824 ******
2025-09-08 00:54:58.740408 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.740419 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.740429 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.740449 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.740480 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.740494 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.740505 | orchestrator |
2025-09-08 00:54:58.740535 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-08 00:54:58.740547 | orchestrator | Monday 08 September 2025 00:43:42 +0000 (0:00:00.891) 0:00:11.715 ******
2025-09-08 00:54:58.740558 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:54:58.740569 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:54:58.740580 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:54:58.740591 | orchestrator |
2025-09-08 00:54:58.740602 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-08 00:54:58.740612 | orchestrator | Monday 08 September 2025 00:43:45 +0000 (0:00:03.020) 0:00:14.736 ******
2025-09-08 00:54:58.740623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:54:58.740634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:54:58.740645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:54:58.740656 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.740667 | orchestrator |
2025-09-08 00:54:58.740677 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-08 00:54:58.740688 | orchestrator | Monday 08 September 2025 00:43:46 +0000 (0:00:00.883) 0:00:15.619 ******
2025-09-08 00:54:58.740708 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.740721 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.740733 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.740744 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.740755 | orchestrator |
2025-09-08 00:54:58.740766 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-08 00:54:58.740777 | orchestrator | Monday 08 September 2025 00:43:47 +0000 (0:00:00.686) 0:00:16.305 ******
2025-09-08 00:54:58.740790 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.740804 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.740815 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.740833 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.740844 | orchestrator |
2025-09-08 00:54:58.740855 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-08 00:54:58.740866 | orchestrator | Monday 08 September 2025 00:43:47 +0000 (0:00:00.629) 0:00:16.935 ******
2025-09-08 00:54:58.740879 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-08 00:43:43.212806', 'end': '2025-09-08 00:43:43.494298', 'delta': '0:00:00.281492', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.740903 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-08 00:43:44.140069', 'end': '2025-09-08 00:43:44.417854', 'delta': '0:00:00.277785', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.740922 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-08 00:43:45.154974', 'end': '2025-09-08 00:43:45.428237', 'delta': '0:00:00.273263', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.740933 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.740944 | orchestrator |
2025-09-08 00:54:58.740955 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-08 00:54:58.740966 | orchestrator | Monday 08 September 2025 00:43:48 +0000 (0:00:00.237) 0:00:17.173 ******
2025-09-08 00:54:58.741116 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.741130 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.741141 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.741152 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.741162 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.741173 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.741183 | orchestrator |
2025-09-08 00:54:58.741194 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-08 00:54:58.741205 | orchestrator | Monday 08 September 2025 00:43:50 +0000 (0:00:02.173) 0:00:19.346 ******
2025-09-08 00:54:58.741216 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.741226 | orchestrator |
2025-09-08 00:54:58.741237 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-08 00:54:58.741248 | orchestrator | Monday 08 September 2025 00:43:50 +0000 (0:00:00.661) 0:00:20.008 ******
2025-09-08 00:54:58.741259 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.741269 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.741280 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.741291 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.741310 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.741321 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.741331 | orchestrator |
2025-09-08 00:54:58.741342 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-08 00:54:58.741353 | orchestrator | Monday 08 September 2025 00:43:53 +0000 (0:00:02.165) 0:00:22.173 ******
2025-09-08 00:54:58.741364 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.741374 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.741385 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.741395 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.741406 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.741417 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.741428 | orchestrator |
2025-09-08 00:54:58.741438 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-08 00:54:58.741449 | orchestrator | Monday 08 September 2025 00:43:54 +0000 (0:00:01.429) 0:00:23.603 ******
2025-09-08 00:54:58.741460 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.742314 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.742344 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.742357 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.742369 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.742380 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.742392 | orchestrator |
2025-09-08 00:54:58.742406 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-08 00:54:58.742542 | orchestrator | Monday 08 September 2025 00:43:56 +0000 (0:00:01.621) 0:00:25.224 ******
2025-09-08 00:54:58.742557 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.742631 | orchestrator |
2025-09-08 00:54:58.742644 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-08 00:54:58.742739 | orchestrator | Monday 08 September 2025 00:43:56 +0000 (0:00:00.160) 0:00:25.385 ******
2025-09-08 00:54:58.742751 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.742762 | orchestrator |
2025-09-08 00:54:58.742773 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-08 00:54:58.742784 | orchestrator | Monday 08 September 2025 00:43:56 +0000 (0:00:00.399) 0:00:25.784 ******
2025-09-08 00:54:58.742794 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.742805 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.742816 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.742827 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.742837 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.742848 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.742859 | orchestrator |
2025-09-08 00:54:58.742870 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-08 00:54:58.742913 | orchestrator | Monday 08 September 2025 00:43:57 +0000 (0:00:01.001) 0:00:26.786 ******
2025-09-08 00:54:58.742925 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.742936 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.742947 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.742957 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.742968 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.742979 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.742990 | orchestrator |
2025-09-08 00:54:58.743001 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-08 00:54:58.743011 | orchestrator | Monday 08 September 2025 00:43:59 +0000 (0:00:01.604) 0:00:28.390 ******
2025-09-08 00:54:58.743052 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.743063 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.743075 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.743086 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.743097 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.743107 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.743118 | orchestrator |
2025-09-08 00:54:58.743158 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-08 00:54:58.743169 | orchestrator | Monday 08 September 2025 00:44:00 +0000 (0:00:01.023) 0:00:29.413 ******
2025-09-08 00:54:58.743180 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.743191 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.743202 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.743212 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.743223 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.743234 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.743244 | orchestrator | 2025-09-08 00:54:58.743272 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-08 00:54:58.743283 | orchestrator | Monday 08 September 2025 00:44:01 +0000 (0:00:00.825) 0:00:30.239 ****** 2025-09-08 00:54:58.743294 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.743305 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.743316 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.743327 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.743337 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.743348 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.743359 | orchestrator | 2025-09-08 00:54:58.743369 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-08 00:54:58.743380 | orchestrator | Monday 08 September 2025 00:44:01 +0000 (0:00:00.739) 0:00:30.979 ****** 2025-09-08 00:54:58.743391 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.743402 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.743412 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.743423 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.743434 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.743445 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.743455 | orchestrator | 2025-09-08 00:54:58.743497 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-08 00:54:58.743509 | orchestrator | 
Monday 08 September 2025 00:44:02 +0000 (0:00:00.836) 0:00:31.815 ******
2025-09-08 00:54:58.743520 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.743566 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.743578 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.743589 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.743599 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.743610 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.743620 | orchestrator |
2025-09-08 00:54:58.743631 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-08 00:54:58.743642 | orchestrator | Monday 08 September 2025 00:44:03 +0000 (0:00:00.735) 0:00:32.550 ******
2025-09-08 00:54:58.743656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:54:58.743672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:54:58.743683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-09-08 00:54:58.743768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part1', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part14', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part15', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part16', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.743833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.743847 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.743858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-09-08 00:54:58.743932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.743986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part1', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part14', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part15', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part16', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744001 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744013 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.744024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-09-08 00:54:58.744068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096', 'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part1', 'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part14', 'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part15', 'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part16', 'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6245231a--5e27--588f--a545--a88193777b58-osd--block--6245231a--5e27--588f--a545--a88193777b58', 'dm-uuid-LVM-ybfRSmP8aGvHZUQPpShCMnW81sVOrSC9QwPPWmQXHuy8umSXHWxMosTwNB3imKdE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7231c7d5--5dfe--5215--9efd--b7a5c24f93db-osd--block--7231c7d5--5dfe--5215--9efd--b7a5c24f93db', 
'dm-uuid-LVM-DCDH7v4K4rkh5TDCYsRcjSlEn4Mtwf95aX2nE1oqSx8ElBUBJTUYi4w7is09qig5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744285 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.744297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6245231a--5e27--588f--a545--a88193777b58-osd--block--6245231a--5e27--588f--a545--a88193777b58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zPbkUF-43iM-f14M-elPj-0f0f-rbpN-fue70D', 'scsi-0QEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb', 'scsi-SQEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7231c7d5--5dfe--5215--9efd--b7a5c24f93db-osd--block--7231c7d5--5dfe--5215--9efd--b7a5c24f93db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3n0S09-IFC6-Nl3O-uLeF-6Jsb-WQZn-RBM2uq', 'scsi-0QEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5', 'scsi-SQEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3', 'scsi-SQEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a-osd--block--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a', 'dm-uuid-LVM-mxP93V13tGOgpkMOcBTuQfkcNJX2UjsZS2aaa8YDcnJLK5Igyth1WabbrmtHcWMT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e84ec590--0593--5433--8536--9c5125166743-osd--block--e84ec590--0593--5433--8536--9c5125166743', 'dm-uuid-LVM-WEqtChBOdKGBjIu5Y01mhGfsmTnLrlNqdBufqJ9YSIa2K3maj7hXtXDOt1KJOSWd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744519 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.744531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744590 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2-osd--block--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2', 'dm-uuid-LVM-XTzZM3bLUDaXcirK3ZwflIcp3GvMOu5T6B1X5Wty47glqSemh8Y7qpfEJ745ZbCd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part1', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part14', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part15', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part16', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744659 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf-osd--block--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf', 'dm-uuid-LVM-1XvLKJmy5l0bje2V12wizBHeYh42P73FPgVNxtQcW1FD9Z3QWutNwTNHqe4kmMiZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a-osd--block--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iXfWtL-RTU8-FkoO-Gbwb-oDS6-k7sB-9BfgEC', 'scsi-0QEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359', 'scsi-SQEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--e84ec590--0593--5433--8536--9c5125166743-osd--block--e84ec590--0593--5433--8536--9c5125166743'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FKIkGg-R33Y-ICa0-ANyr-3sUG-8DEa-g2sTx2', 'scsi-0QEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98', 'scsi-SQEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72', 'scsi-SQEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744773 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.744784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:54:58.744866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part1', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part14', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part15', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part16', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2-osd--block--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-H51Ski-1r8N-dM8l-fA8Q-Fhgd-JN65-QyZofI', 'scsi-0QEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e', 'scsi-SQEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf-osd--block--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lRiLct-XwAy-PIGL-PiHo-1I52-cd9m-kP0Os0', 'scsi-0QEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d', 'scsi-SQEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962', 'scsi-SQEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:54:58.744939 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.744950 | orchestrator | 2025-09-08 00:54:58.744962 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-08 00:54:58.744973 | orchestrator | Monday 08 September 2025 00:44:05 +0000 (0:00:01.735) 0:00:34.285 ****** 2025-09-08 00:54:58.744984 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745001 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745020 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745031 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745043 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745054 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745074 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745085 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745114 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part1', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part14', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part15', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part16', 'scsi-SQEMU_QEMU_HARDDISK_770c2301-18bb-4c29-9bb9-bab8a6016772-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-08 00:54:58.745134 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745146 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.745158 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745174 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745193 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745204 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745216 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745227 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745246 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745258 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745281 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part1', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part14', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part15', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part16', 'scsi-SQEMU_QEMU_HARDDISK_f986762e-5135-4807-98e0-2a6dc6746cab-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745294 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745305 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.745323 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745340 | orchestrator 
| skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745359 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745370 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745381 | orchestrator | skipping: [testbed-node-2] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745393 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745412 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745424 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745449 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096', 'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part1', 'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part14', 'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part16', 'scsi-SQEMU_QEMU_HARDDISK_ca7ccc28-8d09-4824-b2e3-b19f9e947096-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745510 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745524 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.745797 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6245231a--5e27--588f--a545--a88193777b58-osd--block--6245231a--5e27--588f--a545--a88193777b58', 'dm-uuid-LVM-ybfRSmP8aGvHZUQPpShCMnW81sVOrSC9QwPPWmQXHuy8umSXHWxMosTwNB3imKdE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745842 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7231c7d5--5dfe--5215--9efd--b7a5c24f93db-osd--block--7231c7d5--5dfe--5215--9efd--b7a5c24f93db', 'dm-uuid-LVM-DCDH7v4K4rkh5TDCYsRcjSlEn4Mtwf95aX2nE1oqSx8ElBUBJTUYi4w7is09qig5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-09-08 00:54:58.745908 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a-osd--block--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a', 'dm-uuid-LVM-mxP93V13tGOgpkMOcBTuQfkcNJX2UjsZS2aaa8YDcnJLK5Igyth1WabbrmtHcWMT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745957 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e84ec590--0593--5433--8536--9c5125166743-osd--block--e84ec590--0593--5433--8536--9c5125166743', 'dm-uuid-LVM-WEqtChBOdKGBjIu5Y01mhGfsmTnLrlNqdBufqJ9YSIa2K3maj7hXtXDOt1KJOSWd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745980 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.745998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746069 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-08 00:54:58.746106 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6245231a--5e27--588f--a545--a88193777b58-osd--block--6245231a--5e27--588f--a545--a88193777b58'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zPbkUF-43iM-f14M-elPj-0f0f-rbpN-fue70D', 'scsi-0QEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb', 'scsi-SQEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746153 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746166 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7231c7d5--5dfe--5215--9efd--b7a5c24f93db-osd--block--7231c7d5--5dfe--5215--9efd--b7a5c24f93db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3n0S09-IFC6-Nl3O-uLeF-6Jsb-WQZn-RBM2uq', 'scsi-0QEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5', 'scsi-SQEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746177 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3', 'scsi-SQEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746206 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746242 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746254 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.746266 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:54:58.746284 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part1', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part14', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part15', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part16', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-08 00:54:58.746305 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2-osd--block--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2', 'dm-uuid-LVM-XTzZM3bLUDaXcirK3ZwflIcp3GvMOu5T6B1X5Wty47glqSemh8Y7qpfEJ745ZbCd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a-osd--block--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iXfWtL-RTU8-FkoO-Gbwb-oDS6-k7sB-9BfgEC', 'scsi-0QEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359', 'scsi-SQEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746334 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf-osd--block--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf', 'dm-uuid-LVM-1XvLKJmy5l0bje2V12wizBHeYh42P73FPgVNxtQcW1FD9Z3QWutNwTNHqe4kmMiZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746345 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e84ec590--0593--5433--8536--9c5125166743-osd--block--e84ec590--0593--5433--8536--9c5125166743'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FKIkGg-R33Y-ICa0-ANyr-3sUG-8DEa-g2sTx2', 'scsi-0QEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98', 'scsi-SQEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746371 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746390 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72', 'scsi-SQEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746405 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746419 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746446 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746530 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.746553 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746566 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746586 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746598 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746615 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part1', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part14', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part15', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part16', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746636 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2-osd--block--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-H51Ski-1r8N-dM8l-fA8Q-Fhgd-JN65-QyZofI', 'scsi-0QEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e', 'scsi-SQEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746649 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf-osd--block--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lRiLct-XwAy-PIGL-PiHo-1I52-cd9m-kP0Os0', 'scsi-0QEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d', 'scsi-SQEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962', 'scsi-SQEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746679 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:54:58.746691 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.746702 | orchestrator |
2025-09-08 00:54:58.746714 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-08 00:54:58.746725 | orchestrator | Monday 08 September 2025 00:44:06 +0000 (0:00:01.128) 0:00:35.414 ******
2025-09-08 00:54:58.746736 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.746749 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.746760 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.746776 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.746787 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.746798 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.746809 | orchestrator |
2025-09-08 00:54:58.746820 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-08 00:54:58.746868 | orchestrator | Monday 08 September 2025 00:44:07 +0000 (0:00:01.180) 0:00:36.594 ******
2025-09-08 00:54:58.746881 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.746892 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.746903 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.746914 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.746924 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.746935 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.746946 | orchestrator |
2025-09-08 00:54:58.746957 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-08 00:54:58.746968 | orchestrator | Monday 08 September 2025 00:44:08 +0000 (0:00:01.038) 0:00:37.633 ******
2025-09-08 00:54:58.746979 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.746990 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.747001 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.747012 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.747023 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.747034 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.747044 | orchestrator |
2025-09-08 00:54:58.747054 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-08 00:54:58.747064 | orchestrator | Monday 08 September 2025 00:44:09 +0000 (0:00:00.733) 0:00:38.367 ******
2025-09-08 00:54:58.747074 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.747084 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.747094 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.747107 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.747117 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.747127 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.747136 | orchestrator |
2025-09-08 00:54:58.747146 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-08 00:54:58.747156 | orchestrator | Monday 08 September 2025 00:44:10 +0000 (0:00:00.735) 0:00:39.102 ******
2025-09-08 00:54:58.747166 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.747175 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.747185 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.747195 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.747204 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.747214 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.747224 | orchestrator |
2025-09-08 00:54:58.747234 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-08 00:54:58.747250 | orchestrator | Monday 08 September 2025 00:44:11 +0000 (0:00:01.132) 0:00:40.235 ******
2025-09-08 00:54:58.747260 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.747269 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.747279 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.747289 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.747299 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.747308 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.747318 | orchestrator |
2025-09-08 00:54:58.747328 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-08 00:54:58.747337 | orchestrator | Monday 08 September 2025 00:44:11 +0000 (0:00:00.780) 0:00:41.015 ******
2025-09-08 00:54:58.747347 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:54:58.747358 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:54:58.747368 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-08 00:54:58.747377 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-08 00:54:58.747387 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:54:58.747397 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-08 00:54:58.747406 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:54:58.747416 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-08 00:54:58.747426 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:54:58.747435 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-08 00:54:58.747445 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:54:58.747454 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:54:58.747479 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:54:58.747490 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-08 00:54:58.747500 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:54:58.747509 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:54:58.747519 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:54:58.747528 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:54:58.747538 | orchestrator |
2025-09-08 00:54:58.747548 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-08 00:54:58.747557 | orchestrator | Monday 08 September 2025 00:44:15 +0000 (0:00:03.127) 0:00:44.143 ******
2025-09-08 00:54:58.747567 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:54:58.747578 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-08 00:54:58.747587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-08 00:54:58.747597 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.747607 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-08 00:54:58.747616 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-08 00:54:58.747626 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-08 00:54:58.747636 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.747645 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-08 00:54:58.747655 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-08 00:54:58.747665 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-08 00:54:58.747675 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.747690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:54:58.747700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:54:58.747710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:54:58.747720 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.747729 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:54:58.747745 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:54:58.747755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:54:58.747765 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.747774 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:54:58.747784 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:54:58.747793 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:54:58.747803 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.747813 | orchestrator |
2025-09-08 00:54:58.747823 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-08 00:54:58.747832 | orchestrator | Monday 08 September 2025 00:44:15 +0000 (0:00:00.730) 0:00:44.874 ******
2025-09-08 00:54:58.747842 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.747852 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.747862 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.747871 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:54:58.747881 | orchestrator |
2025-09-08 00:54:58.747896 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-08 00:54:58.747907 | orchestrator | Monday 08 September 2025 00:44:16 +0000 (0:00:01.159) 0:00:46.033 ******
2025-09-08 00:54:58.747917 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.747927 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.747937 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.747946 | orchestrator |
2025-09-08 00:54:58.747956 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-08 00:54:58.747966 | orchestrator | Monday 08 September 2025 00:44:17 +0000 (0:00:00.499) 0:00:46.533 ******
2025-09-08 00:54:58.747975 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.747985 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.747995 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.748004 | orchestrator |
2025-09-08 00:54:58.748014 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-08 00:54:58.748023 | orchestrator | Monday 08 September 2025 00:44:17 +0000 (0:00:00.431) 0:00:46.964 ******
2025-09-08 00:54:58.748033 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.748043 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.748053 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.748062 | orchestrator |
2025-09-08 00:54:58.748072 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-08 00:54:58.748082 | orchestrator | Monday 08 September 2025 00:44:18 +0000 (0:00:00.318) 0:00:47.283 ******
2025-09-08 00:54:58.748092 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.748101 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.748111 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.748121 | orchestrator |
2025-09-08 00:54:58.748131 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-08 00:54:58.748140 | orchestrator | Monday 08 September 2025 00:44:19 +0000 (0:00:01.074) 0:00:48.357 ******
2025-09-08 00:54:58.748150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:54:58.748159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:54:58.748169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:54:58.748178 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.748188 | orchestrator |
2025-09-08 00:54:58.748198 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-08 00:54:58.748208 | orchestrator | Monday 08 September 2025 00:44:19 +0000 (0:00:00.555) 0:00:48.913 ******
2025-09-08 00:54:58.748217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:54:58.748227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:54:58.748245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:54:58.748254 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.748264 | orchestrator |
2025-09-08 00:54:58.748274 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-08 00:54:58.748284 | orchestrator | Monday 08 September 2025 00:44:20 +0000 (0:00:00.638) 0:00:49.552 ******
2025-09-08 00:54:58.748294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:54:58.748303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:54:58.748313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:54:58.748323 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.748332 | orchestrator |
2025-09-08 00:54:58.748342 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-08 00:54:58.748352 | orchestrator | Monday 08 September 2025 00:44:20 +0000 (0:00:00.348) 0:00:49.901 ******
2025-09-08 00:54:58.748362 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.748371 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.748381 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.748391 | orchestrator |
2025-09-08 00:54:58.748400 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-08 00:54:58.748410 | orchestrator | Monday 08 September 2025 00:44:21 +0000 (0:00:00.457) 0:00:50.359 ******
2025-09-08 00:54:58.748420 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-08 00:54:58.748429 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-08 00:54:58.748439 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-08 00:54:58.748449 | orchestrator |
2025-09-08 00:54:58.748458 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-08 00:54:58.748482 | orchestrator | Monday 08 September 2025 00:44:22 +0000 (0:00:00.923) 0:00:51.283 ******
2025-09-08 00:54:58.748498 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:54:58.748508 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:54:58.748518 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:54:58.748528 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-08 00:54:58.748537 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-08 00:54:58.748547 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-08 00:54:58.748557 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-08 00:54:58.748566 | orchestrator |
2025-09-08 00:54:58.748576 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-08 00:54:58.748586 | orchestrator | Monday 08 September 2025 00:44:23 +0000 (0:00:00.963) 0:00:52.246 ******
2025-09-08 00:54:58.748595 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-08 00:54:58.748605 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:54:58.748614 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:54:58.748624 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-08 00:54:58.748638 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-08 00:54:58.748648 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-08 00:54:58.748658 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-08 00:54:58.748667 | orchestrator |
2025-09-08 00:54:58.748677 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-08 00:54:58.748687 | orchestrator | Monday 08 September 2025 00:44:25 +0000 (0:00:02.153) 0:00:54.399 ******
2025-09-08 00:54:58.748697 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:54:58.748717 | orchestrator |
2025-09-08 00:54:58.748726 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-08 00:54:58.748736 | orchestrator | Monday 08 September 2025 00:44:26 +0000 (0:00:01.270) 0:00:55.670 ******
2025-09-08 00:54:58.748746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:54:58.748756 | orchestrator |
2025-09-08 00:54:58.748766 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-08 00:54:58.748775 | orchestrator | Monday 08 September 2025 00:44:27 +0000 (0:00:01.221) 0:00:56.892 ******
2025-09-08 00:54:58.748785 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.748795 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.748804 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.748814 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.748823 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.748833 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.748843 | orchestrator |
2025-09-08 00:54:58.748852 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-08 00:54:58.748862 | orchestrator | Monday 08 September 2025 00:44:28 +0000 (0:00:01.063) 0:00:57.956 ******
2025-09-08 00:54:58.748872 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.748881 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.748891 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.748901 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.748911 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.748920 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.748930 | orchestrator |
2025-09-08 00:54:58.748939 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-08 00:54:58.748949 | orchestrator | Monday 08 September 2025 00:44:30 +0000 (0:00:01.209) 0:00:59.165 ******
2025-09-08 00:54:58.748959 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.748968 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.748978 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.748988 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.748997 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.749007 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.749016 | orchestrator |
2025-09-08 00:54:58.749026 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-08 00:54:58.749036 | orchestrator | Monday 08 September 2025 00:44:31 +0000 (0:00:01.555) 0:01:00.721 ******
2025-09-08 00:54:58.749046 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.749055 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.749065 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.749075 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:54:58.749084 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:54:58.749094 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:54:58.749103 | orchestrator |
2025-09-08 00:54:58.749113 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-08 00:54:58.749123 | orchestrator | Monday 08 September 2025 00:44:32 +0000 (0:00:01.195) 0:01:01.916 ******
2025-09-08 00:54:58.749133 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:54:58.749142 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:54:58.749152 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.749162 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.749171 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.749181 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:54:58.749190 | orchestrator |
2025-09-08 00:54:58.749200 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-08 00:54:58.749210 | orchestrator | Monday 08 September 2025 00:44:34 +0000 (0:00:01.141) 0:01:03.057 ******
2025-09-08 00:54:58.749224 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:54:58.749234 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:54:58.749251 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:54:58.749261 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:54:58.749270 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:54:58.749280 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:54:58.749289 |
orchestrator | 2025-09-08 00:54:58.749299 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-08 00:54:58.749309 | orchestrator | Monday 08 September 2025 00:44:35 +0000 (0:00:01.087) 0:01:04.144 ****** 2025-09-08 00:54:58.749318 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.749328 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.749338 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.749347 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.749357 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.749366 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.749376 | orchestrator | 2025-09-08 00:54:58.749386 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-08 00:54:58.749396 | orchestrator | Monday 08 September 2025 00:44:36 +0000 (0:00:01.085) 0:01:05.230 ****** 2025-09-08 00:54:58.749405 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.749415 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.749425 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.749434 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.749444 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.749454 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.749505 | orchestrator | 2025-09-08 00:54:58.749516 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-08 00:54:58.749526 | orchestrator | Monday 08 September 2025 00:44:37 +0000 (0:00:01.478) 0:01:06.708 ****** 2025-09-08 00:54:58.749541 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.749551 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.749560 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.749570 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.749579 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.749588 | 
orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.749598 | orchestrator | 2025-09-08 00:54:58.749608 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-08 00:54:58.749617 | orchestrator | Monday 08 September 2025 00:44:39 +0000 (0:00:01.928) 0:01:08.636 ****** 2025-09-08 00:54:58.749627 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.749636 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.749646 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.749655 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.749665 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.749674 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.749684 | orchestrator | 2025-09-08 00:54:58.749693 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-08 00:54:58.749703 | orchestrator | Monday 08 September 2025 00:44:40 +0000 (0:00:00.604) 0:01:09.241 ****** 2025-09-08 00:54:58.749713 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.749722 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.749732 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.749741 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.749751 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.749760 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.749770 | orchestrator | 2025-09-08 00:54:58.749779 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-08 00:54:58.749789 | orchestrator | Monday 08 September 2025 00:44:41 +0000 (0:00:00.869) 0:01:10.111 ****** 2025-09-08 00:54:58.749799 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.749808 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.749818 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.749827 | orchestrator | ok: 
[testbed-node-3] 2025-09-08 00:54:58.749837 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.749847 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.749863 | orchestrator | 2025-09-08 00:54:58.749873 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-08 00:54:58.749881 | orchestrator | Monday 08 September 2025 00:44:41 +0000 (0:00:00.663) 0:01:10.775 ****** 2025-09-08 00:54:58.749889 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.749897 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.749904 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.749912 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.749920 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.749928 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.749935 | orchestrator | 2025-09-08 00:54:58.749943 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-08 00:54:58.749951 | orchestrator | Monday 08 September 2025 00:44:42 +0000 (0:00:01.035) 0:01:11.810 ****** 2025-09-08 00:54:58.749959 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.749967 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.749975 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.749982 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.749990 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.749998 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.750006 | orchestrator | 2025-09-08 00:54:58.750239 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-08 00:54:58.750366 | orchestrator | Monday 08 September 2025 00:44:43 +0000 (0:00:00.751) 0:01:12.562 ****** 2025-09-08 00:54:58.750384 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.750398 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.750410 | 
orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.750421 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.750432 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.750443 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.750454 | orchestrator | 2025-09-08 00:54:58.750501 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-08 00:54:58.750513 | orchestrator | Monday 08 September 2025 00:44:45 +0000 (0:00:01.701) 0:01:14.264 ****** 2025-09-08 00:54:58.750524 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.750535 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.750546 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.750557 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.750568 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.750579 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.750589 | orchestrator | 2025-09-08 00:54:58.750601 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-08 00:54:58.750649 | orchestrator | Monday 08 September 2025 00:44:45 +0000 (0:00:00.782) 0:01:15.046 ****** 2025-09-08 00:54:58.750661 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.750673 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.750684 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.750695 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.750705 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.750716 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.750727 | orchestrator | 2025-09-08 00:54:58.750738 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-08 00:54:58.750749 | orchestrator | Monday 08 September 2025 00:44:47 +0000 (0:00:01.133) 0:01:16.180 ****** 2025-09-08 00:54:58.750760 | orchestrator | ok: 
[testbed-node-0] 2025-09-08 00:54:58.750771 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.750781 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.750792 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.750802 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.750812 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.750823 | orchestrator | 2025-09-08 00:54:58.750834 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-08 00:54:58.750844 | orchestrator | Monday 08 September 2025 00:44:48 +0000 (0:00:01.115) 0:01:17.296 ****** 2025-09-08 00:54:58.750886 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.750897 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.750907 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.750918 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.750928 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.750939 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.750950 | orchestrator | 2025-09-08 00:54:58.750960 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-08 00:54:58.750971 | orchestrator | Monday 08 September 2025 00:44:50 +0000 (0:00:02.000) 0:01:19.297 ****** 2025-09-08 00:54:58.750996 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.751007 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.751018 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.751028 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.751039 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.751050 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.751060 | orchestrator | 2025-09-08 00:54:58.751071 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-08 00:54:58.751082 | orchestrator | Monday 08 September 2025 00:44:53 +0000 (0:00:03.188) 
0:01:22.485 ****** 2025-09-08 00:54:58.751092 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.751103 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.751114 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.751125 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.751135 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.751146 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.751157 | orchestrator | 2025-09-08 00:54:58.751168 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-08 00:54:58.751178 | orchestrator | Monday 08 September 2025 00:44:56 +0000 (0:00:03.196) 0:01:25.682 ****** 2025-09-08 00:54:58.751191 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.751204 | orchestrator | 2025-09-08 00:54:58.751215 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-08 00:54:58.751226 | orchestrator | Monday 08 September 2025 00:44:58 +0000 (0:00:01.629) 0:01:27.311 ****** 2025-09-08 00:54:58.751237 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.751248 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.751259 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.751269 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.751280 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.751291 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.751301 | orchestrator | 2025-09-08 00:54:58.751312 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-08 00:54:58.751323 | orchestrator | Monday 08 September 2025 00:44:59 +0000 (0:00:01.068) 0:01:28.379 ****** 2025-09-08 00:54:58.751334 | orchestrator | skipping: 
[testbed-node-0] 2025-09-08 00:54:58.751345 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.751355 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.751366 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.751376 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.751387 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.751398 | orchestrator | 2025-09-08 00:54:58.751409 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-08 00:54:58.751420 | orchestrator | Monday 08 September 2025 00:45:00 +0000 (0:00:00.789) 0:01:29.168 ****** 2025-09-08 00:54:58.751430 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:54:58.751441 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:54:58.751452 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:54:58.751484 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:54:58.751508 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:54:58.751520 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:54:58.751531 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:54:58.751542 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:54:58.751553 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:54:58.751563 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-08 00:54:58.751574 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 
00:54:58.751585 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-08 00:54:58.751596 | orchestrator | 2025-09-08 00:54:58.751618 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-08 00:54:58.751629 | orchestrator | Monday 08 September 2025 00:45:01 +0000 (0:00:01.856) 0:01:31.024 ****** 2025-09-08 00:54:58.751640 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.751651 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.751662 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.751672 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.751683 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.751694 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.751705 | orchestrator | 2025-09-08 00:54:58.751716 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-08 00:54:58.751727 | orchestrator | Monday 08 September 2025 00:45:02 +0000 (0:00:01.023) 0:01:32.048 ****** 2025-09-08 00:54:58.751737 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.751748 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.751759 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.751770 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.751781 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.751792 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.751803 | orchestrator | 2025-09-08 00:54:58.751814 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-08 00:54:58.751824 | orchestrator | Monday 08 September 2025 00:45:03 +0000 (0:00:00.972) 0:01:33.021 ****** 2025-09-08 00:54:58.751835 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.751846 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.751857 | 
orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.751867 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.751878 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.751894 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.751906 | orchestrator | 2025-09-08 00:54:58.751916 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-08 00:54:58.751927 | orchestrator | Monday 08 September 2025 00:45:04 +0000 (0:00:00.620) 0:01:33.641 ****** 2025-09-08 00:54:58.751938 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.751949 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.751960 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.751970 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.751981 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.751992 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.752003 | orchestrator | 2025-09-08 00:54:58.752014 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-08 00:54:58.752024 | orchestrator | Monday 08 September 2025 00:45:05 +0000 (0:00:00.825) 0:01:34.466 ****** 2025-09-08 00:54:58.752036 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.752047 | orchestrator | 2025-09-08 00:54:58.752058 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-08 00:54:58.752077 | orchestrator | Monday 08 September 2025 00:45:06 +0000 (0:00:01.299) 0:01:35.766 ****** 2025-09-08 00:54:58.752088 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.752099 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.752110 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.752121 | orchestrator | ok: 
[testbed-node-4] 2025-09-08 00:54:58.752132 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.752143 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.752153 | orchestrator | 2025-09-08 00:54:58.752165 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-08 00:54:58.752176 | orchestrator | Monday 08 September 2025 00:46:15 +0000 (0:01:09.009) 0:02:44.775 ****** 2025-09-08 00:54:58.752187 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-08 00:54:58.752197 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-08 00:54:58.752208 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-08 00:54:58.752219 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.752230 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-08 00:54:58.752241 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-08 00:54:58.752252 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-08 00:54:58.752262 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.752273 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-08 00:54:58.752284 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-08 00:54:58.752295 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-08 00:54:58.752306 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.752317 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-08 00:54:58.752328 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-08 00:54:58.752339 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-08 00:54:58.752349 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-08 00:54:58.752360 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-08 00:54:58.752371 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-08 00:54:58.752382 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.752393 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.752404 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-08 00:54:58.752414 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-08 00:54:58.752425 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-08 00:54:58.752442 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.752454 | orchestrator | 2025-09-08 00:54:58.752482 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-08 00:54:58.752493 | orchestrator | Monday 08 September 2025 00:46:16 +0000 (0:00:00.847) 0:02:45.623 ****** 2025-09-08 00:54:58.752504 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.752514 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.752525 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.752536 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.752547 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.752558 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.752568 | orchestrator | 2025-09-08 00:54:58.752579 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-08 00:54:58.752590 | orchestrator | Monday 08 September 2025 00:46:17 +0000 (0:00:00.750) 0:02:46.373 ****** 2025-09-08 00:54:58.752608 | 
orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.752619 | orchestrator | 2025-09-08 00:54:58.752630 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-08 00:54:58.752641 | orchestrator | Monday 08 September 2025 00:46:17 +0000 (0:00:00.179) 0:02:46.553 ****** 2025-09-08 00:54:58.752651 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.752662 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.752673 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.752684 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.752695 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.752705 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.752716 | orchestrator | 2025-09-08 00:54:58.752727 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-08 00:54:58.752743 | orchestrator | Monday 08 September 2025 00:46:18 +0000 (0:00:01.180) 0:02:47.733 ****** 2025-09-08 00:54:58.752754 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.752765 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.752776 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.752787 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.752797 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.752808 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.752819 | orchestrator | 2025-09-08 00:54:58.752829 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-08 00:54:58.752840 | orchestrator | Monday 08 September 2025 00:46:19 +0000 (0:00:00.743) 0:02:48.477 ****** 2025-09-08 00:54:58.752851 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.752862 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.752873 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.752884 | 
orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.752894 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.752905 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.752916 | orchestrator | 2025-09-08 00:54:58.752927 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-08 00:54:58.752938 | orchestrator | Monday 08 September 2025 00:46:20 +0000 (0:00:01.150) 0:02:49.627 ****** 2025-09-08 00:54:58.752948 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.752959 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.752970 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.752981 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.752992 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.753002 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.753013 | orchestrator | 2025-09-08 00:54:58.753024 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-08 00:54:58.753035 | orchestrator | Monday 08 September 2025 00:46:23 +0000 (0:00:02.693) 0:02:52.321 ****** 2025-09-08 00:54:58.753046 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.753056 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.753067 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.753078 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.753089 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.753099 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.753110 | orchestrator | 2025-09-08 00:54:58.753121 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-08 00:54:58.753132 | orchestrator | Monday 08 September 2025 00:46:24 +0000 (0:00:01.023) 0:02:53.345 ****** 2025-09-08 00:54:58.753143 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Monday 08 September 2025 00:46:25 +0000 (0:00:01.554) 0:02:54.899 ******
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Monday 08 September 2025 00:46:26 +0000 (0:00:00.913) 0:02:55.813 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Monday 08 September 2025 00:46:27 +0000 (0:00:00.949) 0:02:56.762 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Monday 08 September 2025 00:46:28 +0000 (0:00:00.815) 0:02:57.578 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Monday 08 September 2025 00:46:29 +0000 (0:00:00.862) 0:02:58.441 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Monday 08 September 2025 00:46:30 +0000 (0:00:00.743) 0:02:59.185 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Monday 08 September 2025 00:46:31 +0000 (0:00:01.051) 0:03:00.236 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Monday 08 September 2025 00:46:31 +0000 (0:00:00.696) 0:03:00.933 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Monday 08 September 2025 00:46:32 +0000 (0:00:00.860) 0:03:01.794 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Monday 08 September 2025 00:46:34 +0000 (0:00:01.497) 0:03:03.291 ******
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create ceph initial directories] ***************************
Monday 08 September 2025 00:46:35 +0000 (0:00:01.222) 0:03:04.514 ******
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-4] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Monday 08 September 2025 00:46:42 +0000 (0:00:06.673) 0:03:11.187 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create rados gateway instance directories] *****************
Monday 08 September 2025 00:46:43 +0000 (0:00:01.251) 0:03:12.439 ******
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Monday 08 September 2025 00:46:44 +0000 (0:00:00.852) 0:03:13.292 ******
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] ********************************************
Monday 08 September 2025 00:46:45 +0000 (0:00:01.366) 0:03:14.658 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Monday 08 September 2025 00:46:46 +0000 (0:00:01.225) 0:03:15.884 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Monday 08 September 2025 00:46:47 +0000 (0:00:00.575) 0:03:16.460 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Monday 08 September 2025 00:46:48 +0000 (0:00:00.705) 0:03:17.166 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact _devices] *****************************************
Monday 08 September 2025 00:46:48 +0000 (0:00:00.631) 0:03:17.797 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Monday 08 September 2025 00:46:49 +0000 (0:00:00.938) 0:03:18.736 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Monday 08 September 2025 00:46:50 +0000 (0:00:00.683) 0:03:19.419 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Monday 08 September 2025 00:46:51 +0000 (0:00:00.918) 0:03:20.338 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Monday 08 September 2025 00:46:51 +0000 (0:00:00.580) 0:03:20.918 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Monday 08 September 2025 00:46:55 +0000 (0:00:03.142) 0:03:24.061 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Monday 08 September 2025 00:46:55 +0000 (0:00:00.695) 0:03:24.756 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Monday 08 September 2025 00:46:56 +0000 (0:00:00.902) 0:03:25.658 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Render rgw configs] ****************************************
Monday 08 September 2025 00:46:57 +0000 (0:00:00.851) 0:03:26.510 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Set config to cluster] *************************************
Monday 08 September 2025 00:46:58 +0000 (0:00:00.834) 0:03:27.345 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-5]

TASK [ceph-config : Set rgw configs to file] ***********************************
Monday 08 September 2025 00:46:58 +0000 (0:00:00.637) 0:03:27.982 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Create ceph conf directory] ********************************
Monday 08 September 2025 00:46:59 +0000 (0:00:00.695) 0:03:28.678 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Monday 08 September 2025 00:47:00 +0000 (0:00:00.592) 0:03:29.270 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Monday 08 September 2025 00:47:00 +0000 (0:00:00.692) 0:03:29.962 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Monday 08 September 2025 00:47:01 +0000 (0:00:00.685) 0:03:30.648 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Monday 08 September 2025 00:47:02 +0000 (0:00:00.863) 0:03:31.512 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact _interface] ****************************************
Monday 08 September 2025 00:47:03 +0000 (0:00:01.126) 0:03:32.639 ******
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Monday 08 September 2025 00:47:04 +0000 (0:00:00.796) 0:03:33.435 ******
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Monday 08 September 2025 00:47:05 +0000 (0:00:00.797) 0:03:34.233 ******
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Monday 08 September 2025 00:47:06 +0000 (0:00:01.113) 0:03:35.346 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Monday 08 September 2025 00:47:07 +0000 (0:00:01.108) 0:03:36.455 ******
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-5] => (item=0)
ok: [testbed-node-4] => (item=0)

TASK [ceph-config : Generate Ceph file] ****************************************
Monday 08 September 2025 00:47:11 +0000 (0:00:03.732) 0:03:40.188 ******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 08 September 2025 00:47:16 +0000 (0:00:05.029) 0:03:45.217 ******
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-4]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Monday 08 September 2025 00:47:17 +0000 (0:00:01.502) 0:03:46.719 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Monday 08 September 2025 00:47:18 +0000 (0:00:01.154)
0:03:47.874 ****** 2025-09-08 00:54:58.758348 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.758357 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.758367 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.758377 | orchestrator | 2025-09-08 00:54:58.758386 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-08 00:54:58.758411 | orchestrator | Monday 08 September 2025 00:47:19 +0000 (0:00:00.329) 0:03:48.204 ****** 2025-09-08 00:54:58.758421 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.758430 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.758440 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.758449 | orchestrator | 2025-09-08 00:54:58.758459 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-08 00:54:58.758485 | orchestrator | Monday 08 September 2025 00:47:20 +0000 (0:00:01.448) 0:03:49.652 ****** 2025-09-08 00:54:58.758495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-08 00:54:58.758504 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-08 00:54:58.758514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-08 00:54:58.758524 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.758533 | orchestrator | 2025-09-08 00:54:58.758543 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-08 00:54:58.758552 | orchestrator | Monday 08 September 2025 00:47:21 +0000 (0:00:00.764) 0:03:50.417 ****** 2025-09-08 00:54:58.758562 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.758572 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.758581 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.758591 | orchestrator | 2025-09-08 00:54:58.758601 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] 
********************************** 2025-09-08 00:54:58.758610 | orchestrator | Monday 08 September 2025 00:47:21 +0000 (0:00:00.431) 0:03:50.849 ****** 2025-09-08 00:54:58.758620 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.758630 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.758639 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.758654 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.758664 | orchestrator | 2025-09-08 00:54:58.758674 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-08 00:54:58.758694 | orchestrator | Monday 08 September 2025 00:47:22 +0000 (0:00:00.840) 0:03:51.689 ****** 2025-09-08 00:54:58.758704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:54:58.758713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-08 00:54:58.758723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:54:58.758732 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.758742 | orchestrator | 2025-09-08 00:54:58.758751 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-08 00:54:58.758761 | orchestrator | Monday 08 September 2025 00:47:23 +0000 (0:00:00.500) 0:03:52.189 ****** 2025-09-08 00:54:58.758770 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.758780 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.758789 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.758799 | orchestrator | 2025-09-08 00:54:58.758809 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-08 00:54:58.758819 | orchestrator | Monday 08 September 2025 00:47:23 +0000 (0:00:00.452) 0:03:52.641 ****** 2025-09-08 00:54:58.758828 | orchestrator | 
skipping: [testbed-node-3] 2025-09-08 00:54:58.758837 | orchestrator | 2025-09-08 00:54:58.758847 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-08 00:54:58.758856 | orchestrator | Monday 08 September 2025 00:47:23 +0000 (0:00:00.186) 0:03:52.828 ****** 2025-09-08 00:54:58.758866 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.758875 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.758885 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.758894 | orchestrator | 2025-09-08 00:54:58.758904 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-08 00:54:58.758913 | orchestrator | Monday 08 September 2025 00:47:24 +0000 (0:00:00.323) 0:03:53.151 ****** 2025-09-08 00:54:58.758923 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.758932 | orchestrator | 2025-09-08 00:54:58.758942 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-08 00:54:58.758951 | orchestrator | Monday 08 September 2025 00:47:24 +0000 (0:00:00.190) 0:03:53.341 ****** 2025-09-08 00:54:58.758961 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.758970 | orchestrator | 2025-09-08 00:54:58.758979 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-08 00:54:58.758989 | orchestrator | Monday 08 September 2025 00:47:24 +0000 (0:00:00.259) 0:03:53.601 ****** 2025-09-08 00:54:58.758999 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759008 | orchestrator | 2025-09-08 00:54:58.759018 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-08 00:54:58.759027 | orchestrator | Monday 08 September 2025 00:47:24 +0000 (0:00:00.313) 0:03:53.915 ****** 2025-09-08 00:54:58.759036 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759046 | orchestrator | 
2025-09-08 00:54:58.759055 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-08 00:54:58.759065 | orchestrator | Monday 08 September 2025 00:47:25 +0000 (0:00:00.456) 0:03:54.372 ****** 2025-09-08 00:54:58.759074 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759084 | orchestrator | 2025-09-08 00:54:58.759093 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-08 00:54:58.759103 | orchestrator | Monday 08 September 2025 00:47:25 +0000 (0:00:00.662) 0:03:55.034 ****** 2025-09-08 00:54:58.759112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:54:58.759122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:54:58.759131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-08 00:54:58.759141 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759150 | orchestrator | 2025-09-08 00:54:58.759160 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-08 00:54:58.759169 | orchestrator | Monday 08 September 2025 00:47:26 +0000 (0:00:00.578) 0:03:55.612 ****** 2025-09-08 00:54:58.759185 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759195 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.759204 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.759214 | orchestrator | 2025-09-08 00:54:58.759229 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-08 00:54:58.759239 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:00.626) 0:03:56.239 ****** 2025-09-08 00:54:58.759249 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759258 | orchestrator | 2025-09-08 00:54:58.759268 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-08 
00:54:58.759277 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:00.232) 0:03:56.472 ****** 2025-09-08 00:54:58.759287 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759296 | orchestrator | 2025-09-08 00:54:58.759306 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-08 00:54:58.759315 | orchestrator | Monday 08 September 2025 00:47:27 +0000 (0:00:00.207) 0:03:56.679 ****** 2025-09-08 00:54:58.759325 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.759334 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.759344 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.759353 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.759363 | orchestrator | 2025-09-08 00:54:58.759372 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-08 00:54:58.759382 | orchestrator | Monday 08 September 2025 00:47:28 +0000 (0:00:01.137) 0:03:57.817 ****** 2025-09-08 00:54:58.759392 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.759401 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.759411 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.759420 | orchestrator | 2025-09-08 00:54:58.759429 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-08 00:54:58.759443 | orchestrator | Monday 08 September 2025 00:47:29 +0000 (0:00:00.373) 0:03:58.190 ****** 2025-09-08 00:54:58.759453 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.759507 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.759519 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.759528 | orchestrator | 2025-09-08 00:54:58.759538 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-08 
00:54:58.759547 | orchestrator | Monday 08 September 2025 00:47:30 +0000 (0:00:01.254) 0:03:59.445 ****** 2025-09-08 00:54:58.759557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:54:58.759567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-08 00:54:58.759576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:54:58.759586 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759595 | orchestrator | 2025-09-08 00:54:58.759605 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-08 00:54:58.759614 | orchestrator | Monday 08 September 2025 00:47:31 +0000 (0:00:00.782) 0:04:00.228 ****** 2025-09-08 00:54:58.759624 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.759633 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.759643 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.759652 | orchestrator | 2025-09-08 00:54:58.759662 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-08 00:54:58.759670 | orchestrator | Monday 08 September 2025 00:47:31 +0000 (0:00:00.549) 0:04:00.778 ****** 2025-09-08 00:54:58.759678 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.759686 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.759694 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.759701 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.759709 | orchestrator | 2025-09-08 00:54:58.759717 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-08 00:54:58.759731 | orchestrator | Monday 08 September 2025 00:47:32 +0000 (0:00:01.072) 0:04:01.850 ****** 2025-09-08 00:54:58.759739 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.759747 | orchestrator | 
ok: [testbed-node-4] 2025-09-08 00:54:58.759754 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.759762 | orchestrator | 2025-09-08 00:54:58.759770 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-08 00:54:58.759778 | orchestrator | Monday 08 September 2025 00:47:33 +0000 (0:00:00.326) 0:04:02.176 ****** 2025-09-08 00:54:58.759786 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.759793 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.759801 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.759809 | orchestrator | 2025-09-08 00:54:58.759816 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-08 00:54:58.759824 | orchestrator | Monday 08 September 2025 00:47:34 +0000 (0:00:01.472) 0:04:03.649 ****** 2025-09-08 00:54:58.759832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:54:58.759840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-08 00:54:58.759847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:54:58.759855 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759863 | orchestrator | 2025-09-08 00:54:58.759871 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-08 00:54:58.759879 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:00.590) 0:04:04.239 ****** 2025-09-08 00:54:58.759887 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.759894 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.759902 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.759910 | orchestrator | 2025-09-08 00:54:58.759918 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-08 00:54:58.759925 | orchestrator | Monday 08 September 2025 00:47:35 +0000 (0:00:00.316) 0:04:04.556 ****** 
2025-09-08 00:54:58.759933 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.759941 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.759949 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.759957 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.759964 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.759972 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.759980 | orchestrator | 2025-09-08 00:54:58.759988 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-08 00:54:58.759996 | orchestrator | Monday 08 September 2025 00:47:36 +0000 (0:00:00.620) 0:04:05.177 ****** 2025-09-08 00:54:58.760009 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.760017 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.760024 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.760032 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.760040 | orchestrator | 2025-09-08 00:54:58.760048 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-08 00:54:58.760056 | orchestrator | Monday 08 September 2025 00:47:37 +0000 (0:00:00.905) 0:04:06.082 ****** 2025-09-08 00:54:58.760064 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.760071 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.760079 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.760087 | orchestrator | 2025-09-08 00:54:58.760095 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-08 00:54:58.760103 | orchestrator | Monday 08 September 2025 00:47:37 +0000 (0:00:00.298) 0:04:06.381 ****** 2025-09-08 00:54:58.760110 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.760118 | orchestrator | changed: [testbed-node-1] 2025-09-08 
00:54:58.760126 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.760134 | orchestrator | 2025-09-08 00:54:58.760142 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-08 00:54:58.760155 | orchestrator | Monday 08 September 2025 00:47:38 +0000 (0:00:01.330) 0:04:07.711 ****** 2025-09-08 00:54:58.760163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-08 00:54:58.760170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-08 00:54:58.760178 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-08 00:54:58.760193 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760201 | orchestrator | 2025-09-08 00:54:58.760209 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-08 00:54:58.760217 | orchestrator | Monday 08 September 2025 00:47:39 +0000 (0:00:00.577) 0:04:08.289 ****** 2025-09-08 00:54:58.760225 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.760233 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.760241 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.760248 | orchestrator | 2025-09-08 00:54:58.760256 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-08 00:54:58.760264 | orchestrator | 2025-09-08 00:54:58.760272 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-08 00:54:58.760279 | orchestrator | Monday 08 September 2025 00:47:39 +0000 (0:00:00.540) 0:04:08.829 ****** 2025-09-08 00:54:58.760287 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.760295 | orchestrator | 2025-09-08 00:54:58.760303 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-08 
00:54:58.760311 | orchestrator | Monday 08 September 2025 00:47:40 +0000 (0:00:00.747) 0:04:09.577 ****** 2025-09-08 00:54:58.760319 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.760327 | orchestrator | 2025-09-08 00:54:58.760334 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-08 00:54:58.760342 | orchestrator | Monday 08 September 2025 00:47:41 +0000 (0:00:00.591) 0:04:10.168 ****** 2025-09-08 00:54:58.760350 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.760358 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.760366 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.760373 | orchestrator | 2025-09-08 00:54:58.760381 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-08 00:54:58.760389 | orchestrator | Monday 08 September 2025 00:47:42 +0000 (0:00:01.134) 0:04:11.303 ****** 2025-09-08 00:54:58.760397 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760405 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.760413 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.760420 | orchestrator | 2025-09-08 00:54:58.760428 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-08 00:54:58.760436 | orchestrator | Monday 08 September 2025 00:47:42 +0000 (0:00:00.323) 0:04:11.626 ****** 2025-09-08 00:54:58.760444 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760452 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.760459 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.760480 | orchestrator | 2025-09-08 00:54:58.760488 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-08 00:54:58.760496 | orchestrator | Monday 08 September 2025 00:47:42 
+0000 (0:00:00.330) 0:04:11.956 ****** 2025-09-08 00:54:58.760504 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760512 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.760519 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.760527 | orchestrator | 2025-09-08 00:54:58.760535 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-08 00:54:58.760543 | orchestrator | Monday 08 September 2025 00:47:43 +0000 (0:00:00.405) 0:04:12.362 ****** 2025-09-08 00:54:58.760551 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.760558 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.760566 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.760574 | orchestrator | 2025-09-08 00:54:58.760587 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-08 00:54:58.760596 | orchestrator | Monday 08 September 2025 00:47:44 +0000 (0:00:01.106) 0:04:13.468 ****** 2025-09-08 00:54:58.760603 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760611 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.760619 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.760627 | orchestrator | 2025-09-08 00:54:58.760634 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-08 00:54:58.760642 | orchestrator | Monday 08 September 2025 00:47:44 +0000 (0:00:00.443) 0:04:13.912 ****** 2025-09-08 00:54:58.760650 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760658 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.760666 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.760673 | orchestrator | 2025-09-08 00:54:58.760681 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-08 00:54:58.760694 | orchestrator | Monday 08 September 2025 00:47:45 +0000 (0:00:00.369) 
0:04:14.281 ****** 2025-09-08 00:54:58.760702 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.760710 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.760718 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.760726 | orchestrator | 2025-09-08 00:54:58.760734 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-08 00:54:58.760742 | orchestrator | Monday 08 September 2025 00:47:45 +0000 (0:00:00.755) 0:04:15.037 ****** 2025-09-08 00:54:58.760749 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.760757 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.760765 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.760773 | orchestrator | 2025-09-08 00:54:58.760781 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-08 00:54:58.760789 | orchestrator | Monday 08 September 2025 00:47:46 +0000 (0:00:00.751) 0:04:15.789 ****** 2025-09-08 00:54:58.760796 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760804 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.760812 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.760820 | orchestrator | 2025-09-08 00:54:58.760828 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-08 00:54:58.760835 | orchestrator | Monday 08 September 2025 00:47:47 +0000 (0:00:00.600) 0:04:16.390 ****** 2025-09-08 00:54:58.760843 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.760851 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.760859 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.760867 | orchestrator | 2025-09-08 00:54:58.760874 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-08 00:54:58.760882 | orchestrator | Monday 08 September 2025 00:47:47 +0000 (0:00:00.420) 0:04:16.810 ****** 2025-09-08 00:54:58.760894 | 
orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760902 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.760910 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.760917 | orchestrator | 2025-09-08 00:54:58.760925 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-08 00:54:58.760933 | orchestrator | Monday 08 September 2025 00:47:48 +0000 (0:00:00.352) 0:04:17.163 ****** 2025-09-08 00:54:58.760941 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760949 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.760956 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.760964 | orchestrator | 2025-09-08 00:54:58.760972 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-08 00:54:58.760980 | orchestrator | Monday 08 September 2025 00:47:48 +0000 (0:00:00.301) 0:04:17.465 ****** 2025-09-08 00:54:58.760988 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.760996 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.761004 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.761011 | orchestrator | 2025-09-08 00:54:58.761019 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-08 00:54:58.761033 | orchestrator | Monday 08 September 2025 00:47:48 +0000 (0:00:00.565) 0:04:18.030 ****** 2025-09-08 00:54:58.761041 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.761048 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.761056 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.761064 | orchestrator | 2025-09-08 00:54:58.761072 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-08 00:54:58.761080 | orchestrator | Monday 08 September 2025 00:47:49 +0000 (0:00:00.310) 0:04:18.340 ****** 2025-09-08 00:54:58.761087 | 
orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.761095 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.761103 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.761111 | orchestrator | 2025-09-08 00:54:58.761119 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-08 00:54:58.761126 | orchestrator | Monday 08 September 2025 00:47:49 +0000 (0:00:00.303) 0:04:18.644 ****** 2025-09-08 00:54:58.761134 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.761142 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.761150 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.761158 | orchestrator | 2025-09-08 00:54:58.761166 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-08 00:54:58.761174 | orchestrator | Monday 08 September 2025 00:47:49 +0000 (0:00:00.322) 0:04:18.966 ****** 2025-09-08 00:54:58.761182 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.761189 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.761197 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.761205 | orchestrator | 2025-09-08 00:54:58.761213 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-08 00:54:58.761221 | orchestrator | Monday 08 September 2025 00:47:50 +0000 (0:00:00.641) 0:04:19.608 ****** 2025-09-08 00:54:58.761229 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.761236 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.761244 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.761252 | orchestrator | 2025-09-08 00:54:58.761260 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-08 00:54:58.761268 | orchestrator | Monday 08 September 2025 00:47:51 +0000 (0:00:00.572) 0:04:20.181 ****** 2025-09-08 00:54:58.761276 | orchestrator | ok: [testbed-node-0] 2025-09-08 
00:54:58.761283 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.761291 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.761299 | orchestrator | 2025-09-08 00:54:58.761307 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-08 00:54:58.761315 | orchestrator | Monday 08 September 2025 00:47:51 +0000 (0:00:00.328) 0:04:20.509 ****** 2025-09-08 00:54:58.761322 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.761331 | orchestrator | 2025-09-08 00:54:58.761339 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-08 00:54:58.761347 | orchestrator | Monday 08 September 2025 00:47:52 +0000 (0:00:00.825) 0:04:21.335 ****** 2025-09-08 00:54:58.761355 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.761363 | orchestrator | 2025-09-08 00:54:58.761370 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-08 00:54:58.761378 | orchestrator | Monday 08 September 2025 00:47:52 +0000 (0:00:00.144) 0:04:21.479 ****** 2025-09-08 00:54:58.761386 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-08 00:54:58.761394 | orchestrator | 2025-09-08 00:54:58.761407 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-08 00:54:58.761415 | orchestrator | Monday 08 September 2025 00:47:53 +0000 (0:00:01.022) 0:04:22.502 ****** 2025-09-08 00:54:58.761423 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.761431 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.761439 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.761446 | orchestrator | 2025-09-08 00:54:58.761454 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-08 00:54:58.761482 | orchestrator | Monday 08 September 
2025 00:47:53 +0000 (0:00:00.412) 0:04:22.914 ****** 2025-09-08 00:54:58.761490 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.761498 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.761506 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.761514 | orchestrator | 2025-09-08 00:54:58.761521 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-08 00:54:58.761529 | orchestrator | Monday 08 September 2025 00:47:54 +0000 (0:00:00.661) 0:04:23.576 ****** 2025-09-08 00:54:58.761537 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.761545 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.761553 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.761561 | orchestrator | 2025-09-08 00:54:58.761569 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-08 00:54:58.761576 | orchestrator | Monday 08 September 2025 00:47:55 +0000 (0:00:01.309) 0:04:24.886 ****** 2025-09-08 00:54:58.761584 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.761592 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.761600 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.761607 | orchestrator | 2025-09-08 00:54:58.761615 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-08 00:54:58.761627 | orchestrator | Monday 08 September 2025 00:47:56 +0000 (0:00:00.817) 0:04:25.703 ****** 2025-09-08 00:54:58.761636 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.761643 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.761651 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.761659 | orchestrator | 2025-09-08 00:54:58.761667 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-08 00:54:58.761675 | orchestrator | Monday 08 September 2025 00:47:57 +0000 
(0:00:00.718) 0:04:26.421 ****** 2025-09-08 00:54:58.761683 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.761691 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.761698 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.761706 | orchestrator | 2025-09-08 00:54:58.761714 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-08 00:54:58.761722 | orchestrator | Monday 08 September 2025 00:47:58 +0000 (0:00:01.103) 0:04:27.525 ****** 2025-09-08 00:54:58.761730 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.761738 | orchestrator | 2025-09-08 00:54:58.761745 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-08 00:54:58.761753 | orchestrator | Monday 08 September 2025 00:47:59 +0000 (0:00:01.280) 0:04:28.805 ****** 2025-09-08 00:54:58.761761 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.761769 | orchestrator | 2025-09-08 00:54:58.761777 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-08 00:54:58.761785 | orchestrator | Monday 08 September 2025 00:48:00 +0000 (0:00:00.699) 0:04:29.504 ****** 2025-09-08 00:54:58.761793 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-08 00:54:58.761801 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:54:58.761809 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:54:58.761816 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:54:58.761824 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-08 00:54:58.761833 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:54:58.761840 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:54:58.761848 | 
orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-08 00:54:58.761856 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:54:58.761864 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-08 00:54:58.761872 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-08 00:54:58.761880 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-08 00:54:58.761892 | orchestrator | 2025-09-08 00:54:58.761900 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-08 00:54:58.761908 | orchestrator | Monday 08 September 2025 00:48:03 +0000 (0:00:03.138) 0:04:32.643 ****** 2025-09-08 00:54:58.761916 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.761923 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.761931 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.761939 | orchestrator | 2025-09-08 00:54:58.761947 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-08 00:54:58.761955 | orchestrator | Monday 08 September 2025 00:48:05 +0000 (0:00:01.620) 0:04:34.264 ****** 2025-09-08 00:54:58.761963 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.761970 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.761978 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.761986 | orchestrator | 2025-09-08 00:54:58.761994 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-08 00:54:58.762002 | orchestrator | Monday 08 September 2025 00:48:05 +0000 (0:00:00.356) 0:04:34.620 ****** 2025-09-08 00:54:58.762010 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.762107 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.762119 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.762126 | orchestrator | 2025-09-08 00:54:58.762134 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2025-09-08 00:54:58.762142 | orchestrator | Monday 08 September 2025 00:48:06 +0000 (0:00:00.642) 0:04:35.263 ****** 2025-09-08 00:54:58.762150 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.762158 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.762166 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.762174 | orchestrator | 2025-09-08 00:54:58.762181 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-08 00:54:58.762218 | orchestrator | Monday 08 September 2025 00:48:08 +0000 (0:00:02.217) 0:04:37.480 ****** 2025-09-08 00:54:58.762228 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.762236 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.762244 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.762252 | orchestrator | 2025-09-08 00:54:58.762260 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-08 00:54:58.762268 | orchestrator | Monday 08 September 2025 00:48:10 +0000 (0:00:02.146) 0:04:39.627 ****** 2025-09-08 00:54:58.762276 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.762284 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.762292 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.762300 | orchestrator | 2025-09-08 00:54:58.762308 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-08 00:54:58.762316 | orchestrator | Monday 08 September 2025 00:48:11 +0000 (0:00:00.460) 0:04:40.088 ****** 2025-09-08 00:54:58.762324 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.762332 | orchestrator | 2025-09-08 00:54:58.762340 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-08 00:54:58.762348 | 
orchestrator | Monday 08 September 2025 00:48:12 +0000 (0:00:01.149) 0:04:41.237 ****** 2025-09-08 00:54:58.762356 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.762364 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.762372 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.762380 | orchestrator | 2025-09-08 00:54:58.762388 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-08 00:54:58.762400 | orchestrator | Monday 08 September 2025 00:48:12 +0000 (0:00:00.305) 0:04:41.542 ****** 2025-09-08 00:54:58.762409 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.762417 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.762424 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.762432 | orchestrator | 2025-09-08 00:54:58.762440 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-08 00:54:58.762454 | orchestrator | Monday 08 September 2025 00:48:12 +0000 (0:00:00.339) 0:04:41.881 ****** 2025-09-08 00:54:58.762500 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.762509 | orchestrator | 2025-09-08 00:54:58.762517 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-08 00:54:58.762525 | orchestrator | Monday 08 September 2025 00:48:13 +0000 (0:00:00.767) 0:04:42.649 ****** 2025-09-08 00:54:58.762533 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.762541 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.762549 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.762556 | orchestrator | 2025-09-08 00:54:58.762564 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-08 00:54:58.762572 | orchestrator | Monday 08 September 2025 00:48:15 +0000 (0:00:01.577) 
0:04:44.227 ****** 2025-09-08 00:54:58.762580 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.762588 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.762596 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.762604 | orchestrator | 2025-09-08 00:54:58.762612 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-08 00:54:58.762620 | orchestrator | Monday 08 September 2025 00:48:16 +0000 (0:00:01.263) 0:04:45.490 ****** 2025-09-08 00:54:58.762627 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.762635 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.762643 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.762651 | orchestrator | 2025-09-08 00:54:58.762659 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-08 00:54:58.762667 | orchestrator | Monday 08 September 2025 00:48:18 +0000 (0:00:02.129) 0:04:47.619 ****** 2025-09-08 00:54:58.762675 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.762682 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.762690 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.762698 | orchestrator | 2025-09-08 00:54:58.762706 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-08 00:54:58.762714 | orchestrator | Monday 08 September 2025 00:48:20 +0000 (0:00:01.979) 0:04:49.598 ****** 2025-09-08 00:54:58.762722 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.762730 | orchestrator | 2025-09-08 00:54:58.762737 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-09-08 00:54:58.762745 | orchestrator | Monday 08 September 2025 00:48:21 +0000 (0:00:00.544) 0:04:50.143 ****** 2025-09-08 00:54:58.762753 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.762761 | orchestrator | 2025-09-08 00:54:58.762769 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-08 00:54:58.762777 | orchestrator | Monday 08 September 2025 00:48:22 +0000 (0:00:01.433) 0:04:51.576 ****** 2025-09-08 00:54:58.762785 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.762793 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.762800 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.762808 | orchestrator | 2025-09-08 00:54:58.762816 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-08 00:54:58.762824 | orchestrator | Monday 08 September 2025 00:48:33 +0000 (0:00:10.850) 0:05:02.427 ****** 2025-09-08 00:54:58.762832 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.762840 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.762848 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.762855 | orchestrator | 2025-09-08 00:54:58.762863 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-08 00:54:58.762871 | orchestrator | Monday 08 September 2025 00:48:33 +0000 (0:00:00.333) 0:05:02.761 ****** 2025-09-08 00:54:58.762907 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd224c53046eb90076ce2f890bee71cfeaf39f3a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-08 00:54:58.762924 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd224c53046eb90076ce2f890bee71cfeaf39f3a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-08 00:54:58.762933 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd224c53046eb90076ce2f890bee71cfeaf39f3a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-08 00:54:58.762947 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd224c53046eb90076ce2f890bee71cfeaf39f3a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-08 00:54:58.762955 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd224c53046eb90076ce2f890bee71cfeaf39f3a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-08 00:54:58.762964 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__fd224c53046eb90076ce2f890bee71cfeaf39f3a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': 
'__omit_place_holder__fd224c53046eb90076ce2f890bee71cfeaf39f3a'}])  2025-09-08 00:54:58.762974 | orchestrator | 2025-09-08 00:54:58.762982 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-08 00:54:58.762990 | orchestrator | Monday 08 September 2025 00:48:48 +0000 (0:00:14.896) 0:05:17.657 ****** 2025-09-08 00:54:58.762998 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763006 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763013 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763021 | orchestrator | 2025-09-08 00:54:58.763028 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-08 00:54:58.763034 | orchestrator | Monday 08 September 2025 00:48:48 +0000 (0:00:00.362) 0:05:18.020 ****** 2025-09-08 00:54:58.763041 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.763048 | orchestrator | 2025-09-08 00:54:58.763054 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-08 00:54:58.763061 | orchestrator | Monday 08 September 2025 00:48:49 +0000 (0:00:00.515) 0:05:18.535 ****** 2025-09-08 00:54:58.763068 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.763074 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.763081 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.763087 | orchestrator | 2025-09-08 00:54:58.763094 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-08 00:54:58.763101 | orchestrator | Monday 08 September 2025 00:48:50 +0000 (0:00:00.523) 0:05:19.058 ****** 2025-09-08 00:54:58.763107 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763114 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763120 | orchestrator | skipping: [testbed-node-2] 2025-09-08 
00:54:58.763127 | orchestrator | 2025-09-08 00:54:58.763138 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-08 00:54:58.763145 | orchestrator | Monday 08 September 2025 00:48:50 +0000 (0:00:00.381) 0:05:19.440 ****** 2025-09-08 00:54:58.763151 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-08 00:54:58.763158 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-08 00:54:58.763165 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-08 00:54:58.763171 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763178 | orchestrator | 2025-09-08 00:54:58.763184 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-08 00:54:58.763191 | orchestrator | Monday 08 September 2025 00:48:51 +0000 (0:00:00.657) 0:05:20.097 ****** 2025-09-08 00:54:58.763198 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.763204 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.763211 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.763218 | orchestrator | 2025-09-08 00:54:58.763224 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-08 00:54:58.763231 | orchestrator | 2025-09-08 00:54:58.763237 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-08 00:54:58.763244 | orchestrator | Monday 08 September 2025 00:48:51 +0000 (0:00:00.545) 0:05:20.642 ****** 2025-09-08 00:54:58.763270 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.763278 | orchestrator | 2025-09-08 00:54:58.763285 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-08 00:54:58.763292 | orchestrator | Monday 08 September 2025 00:48:52 +0000 
(0:00:00.832) 0:05:21.474 ****** 2025-09-08 00:54:58.763298 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.763305 | orchestrator | 2025-09-08 00:54:58.763312 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-08 00:54:58.763319 | orchestrator | Monday 08 September 2025 00:48:52 +0000 (0:00:00.576) 0:05:22.051 ****** 2025-09-08 00:54:58.763325 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.763332 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.763339 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.763345 | orchestrator | 2025-09-08 00:54:58.763352 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-08 00:54:58.763359 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:01.116) 0:05:23.167 ****** 2025-09-08 00:54:58.763366 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763372 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763379 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763385 | orchestrator | 2025-09-08 00:54:58.763392 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-08 00:54:58.763399 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:00.428) 0:05:23.595 ****** 2025-09-08 00:54:58.763406 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763416 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763423 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763429 | orchestrator | 2025-09-08 00:54:58.763436 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-08 00:54:58.763443 | orchestrator | Monday 08 September 2025 00:48:54 +0000 (0:00:00.326) 0:05:23.922 ****** 2025-09-08 00:54:58.763449 | 
orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763456 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763475 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763482 | orchestrator | 2025-09-08 00:54:58.763489 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-08 00:54:58.763496 | orchestrator | Monday 08 September 2025 00:48:55 +0000 (0:00:00.346) 0:05:24.268 ****** 2025-09-08 00:54:58.763502 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.763509 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.763521 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.763528 | orchestrator | 2025-09-08 00:54:58.763534 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-08 00:54:58.763541 | orchestrator | Monday 08 September 2025 00:48:56 +0000 (0:00:01.123) 0:05:25.392 ****** 2025-09-08 00:54:58.763548 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763555 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763561 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763568 | orchestrator | 2025-09-08 00:54:58.763575 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-08 00:54:58.763581 | orchestrator | Monday 08 September 2025 00:48:56 +0000 (0:00:00.480) 0:05:25.873 ****** 2025-09-08 00:54:58.763588 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763595 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763601 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763608 | orchestrator | 2025-09-08 00:54:58.763615 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-08 00:54:58.763621 | orchestrator | Monday 08 September 2025 00:48:57 +0000 (0:00:00.315) 0:05:26.189 ****** 2025-09-08 00:54:58.763628 | orchestrator | ok: 
[testbed-node-0] 2025-09-08 00:54:58.763635 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.763641 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.763648 | orchestrator | 2025-09-08 00:54:58.763655 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-08 00:54:58.763661 | orchestrator | Monday 08 September 2025 00:48:58 +0000 (0:00:00.886) 0:05:27.076 ****** 2025-09-08 00:54:58.763668 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.763675 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.763681 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.763688 | orchestrator | 2025-09-08 00:54:58.763695 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-08 00:54:58.763701 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:01.347) 0:05:28.424 ****** 2025-09-08 00:54:58.763708 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763715 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763721 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763728 | orchestrator | 2025-09-08 00:54:58.763735 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-08 00:54:58.763741 | orchestrator | Monday 08 September 2025 00:48:59 +0000 (0:00:00.316) 0:05:28.740 ****** 2025-09-08 00:54:58.763748 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.763755 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.763761 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.763768 | orchestrator | 2025-09-08 00:54:58.763775 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-08 00:54:58.763781 | orchestrator | Monday 08 September 2025 00:49:00 +0000 (0:00:00.332) 0:05:29.073 ****** 2025-09-08 00:54:58.763788 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763795 | 
orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763802 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763808 | orchestrator | 2025-09-08 00:54:58.763815 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-08 00:54:58.763822 | orchestrator | Monday 08 September 2025 00:49:00 +0000 (0:00:00.375) 0:05:29.449 ****** 2025-09-08 00:54:58.763828 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763835 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763842 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763848 | orchestrator | 2025-09-08 00:54:58.763855 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-08 00:54:58.763862 | orchestrator | Monday 08 September 2025 00:49:00 +0000 (0:00:00.598) 0:05:30.047 ****** 2025-09-08 00:54:58.763887 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763895 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763902 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763913 | orchestrator | 2025-09-08 00:54:58.763920 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-08 00:54:58.763927 | orchestrator | Monday 08 September 2025 00:49:01 +0000 (0:00:00.339) 0:05:30.386 ****** 2025-09-08 00:54:58.763933 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763940 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763947 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763954 | orchestrator | 2025-09-08 00:54:58.763960 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-08 00:54:58.763967 | orchestrator | Monday 08 September 2025 00:49:01 +0000 (0:00:00.372) 0:05:30.759 ****** 2025-09-08 00:54:58.763974 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.763980 | 
orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.763987 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.763994 | orchestrator | 2025-09-08 00:54:58.764000 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-08 00:54:58.764007 | orchestrator | Monday 08 September 2025 00:49:02 +0000 (0:00:00.315) 0:05:31.075 ****** 2025-09-08 00:54:58.764014 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.764021 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.764027 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.764034 | orchestrator | 2025-09-08 00:54:58.764041 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-08 00:54:58.764048 | orchestrator | Monday 08 September 2025 00:49:02 +0000 (0:00:00.590) 0:05:31.665 ****** 2025-09-08 00:54:58.764054 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.764061 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.764074 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.764081 | orchestrator | 2025-09-08 00:54:58.764087 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-08 00:54:58.764094 | orchestrator | Monday 08 September 2025 00:49:02 +0000 (0:00:00.343) 0:05:32.009 ****** 2025-09-08 00:54:58.764101 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.764107 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.764114 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.764121 | orchestrator | 2025-09-08 00:54:58.764127 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-08 00:54:58.764134 | orchestrator | Monday 08 September 2025 00:49:03 +0000 (0:00:00.561) 0:05:32.571 ****** 2025-09-08 00:54:58.764141 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-08 00:54:58.764148 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-08 00:54:58.764155 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-08 00:54:58.764161 | orchestrator | 2025-09-08 00:54:58.764168 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-08 00:54:58.764175 | orchestrator | Monday 08 September 2025 00:49:04 +0000 (0:00:00.866) 0:05:33.437 ****** 2025-09-08 00:54:58.764182 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:54:58.764188 | orchestrator | 2025-09-08 00:54:58.764195 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-08 00:54:58.764202 | orchestrator | Monday 08 September 2025 00:49:05 +0000 (0:00:00.769) 0:05:34.206 ****** 2025-09-08 00:54:58.764208 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:54:58.764215 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:54:58.764222 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:54:58.764228 | orchestrator | 2025-09-08 00:54:58.764235 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-08 00:54:58.764242 | orchestrator | Monday 08 September 2025 00:49:05 +0000 (0:00:00.687) 0:05:34.894 ****** 2025-09-08 00:54:58.764249 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.764255 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.764262 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.764269 | orchestrator | 2025-09-08 00:54:58.764280 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-08 00:54:58.764286 | orchestrator | Monday 08 September 2025 00:49:06 +0000 (0:00:00.363) 0:05:35.258 ****** 2025-09-08 00:54:58.764293 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-08 
00:54:58.764300 | orchestrator | changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Monday 08 September 2025 00:49:17 +0000 (0:00:11.332) 0:05:46.591 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Monday 08 September 2025 00:49:18 +0000 (0:00:00.729) 0:05:47.320 ******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Monday 08 September 2025 00:49:20 +0000 (0:00:02.144) 0:05:49.465 ******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Monday 08 September 2025 00:49:21 +0000 (0:00:01.382) 0:05:50.847 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Monday 08 September 2025 00:49:22 +0000 (0:00:00.698) 0:05:51.546 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Monday 08 September 2025 00:49:23 +0000 (0:00:00.607) 0:05:52.154 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Monday 08 September 2025 00:49:23 +0000 (0:00:00.314) 0:05:52.468 ******
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Monday 08 September 2025 00:49:23 +0000 (0:00:00.534) 0:05:53.003 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Monday 08 September 2025 00:49:24 +0000 (0:00:00.566) 0:05:53.569 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Monday 08 September 2025 00:49:24 +0000 (0:00:00.325) 0:05:53.895 ******
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Monday 08 September 2025 00:49:25 +0000 (0:00:00.532) 0:05:54.427 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Monday 08 September 2025 00:49:26 +0000 (0:00:01.593) 0:05:56.020 ******
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Monday 08 September 2025 00:49:28 +0000 (0:00:01.372) 0:05:57.393 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Monday 08 September 2025 00:49:30 +0000 (0:00:01.732) 0:05:59.126 ******
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Monday 08 September 2025 00:49:32 +0000 (0:00:02.858) 0:06:01.984 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Monday 08 September 2025 00:49:33 +0000 (0:00:00.709) 0:06:02.693 ******
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
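The retry loop logged above is the standard Ansible retries/until pattern: poll the cluster from a delegate until the mgr map reports availability. A minimal sketch of such a task (the task layout and the exact `until` condition are assumptions, not copied from ceph-ansible; `ceph mgr dump` does expose an `available` flag in its JSON output):

```yaml
# Hypothetical sketch of a retries/until mgr health check, not the
# actual ceph-ansible task. Assumes the ceph CLI works on the delegate.
- name: Wait for all mgr to be up
  command: ceph mgr dump --format json
  register: mgr_dump
  delegate_to: "{{ groups[mon_group_name][0] }}"
  retries: 30
  delay: 5
  until: (mgr_dump.stdout | from_json).available | bool
  changed_when: false
```

With `retries: 30` and a few seconds of delay, five failed polls before success matches the roughly 30-second wait recorded in the log.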
2025-09-08 00:54:58.765055 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Monday 08 September 2025 00:50:03 +0000 (0:00:30.216) 0:06:32.910 ******
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Monday 08 September 2025 00:50:05 +0000 (0:00:01.364) 0:06:34.274 ******
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Monday 08 September 2025 00:50:05 +0000 (0:00:00.348) 0:06:34.622 ******
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Monday 08 September 2025 00:50:05 +0000 (0:00:00.138) 0:06:34.761 ******
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Monday 08 September 2025 00:50:12 +0000 (0:00:06.366) 0:06:41.127 ******
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 08 September 2025 00:50:16 +0000 (0:00:04.893) 0:06:46.020 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Monday 08 September 2025 00:50:17 +0000 (0:00:00.641) 0:06:46.661 ******
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Monday 08 September 2025 00:50:18 +0000 (0:00:00.629) 0:06:47.291 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Monday 08 September 2025 00:50:18 +0000 (0:00:00.592) 0:06:47.884 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Monday 08 September 2025 00:50:20 +0000 (0:00:01.194) 0:06:49.078 ******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Monday 08 September 2025 00:50:20 +0000 (0:00:00.667) 0:06:49.746 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 08 September 2025 00:50:21 +0000 (0:00:00.820) 0:06:50.566 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 08 September 2025 00:50:22 +0000 (0:00:00.541) 0:06:51.108 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 08 September 2025 00:50:22 +0000 (0:00:00.743) 0:06:51.851 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 08 September 2025 00:50:23 +0000 (0:00:00.325) 0:06:52.176 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 08 September 2025 00:50:23 +0000 (0:00:00.654) 0:06:52.831 ******
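For reference, the mgr module toggling recorded earlier in this play (disabling iostat, nfs, restful; enabling dashboard and prometheus) boils down to the `ceph mgr module` CLI. A sketch wrapped as Ansible tasks; the module lists come from the log, but the task layout is an assumption, not ceph-ansible's actual implementation:

```yaml
# Sketch only: equivalent ceph CLI calls for the module changes above.
# Assumes the ceph CLI and admin keyring are available on the target.
- name: Disable unwanted mgr modules
  command: "ceph mgr module disable {{ item }}"
  loop: [iostat, nfs, restful]

- name: Enable desired mgr modules
  command: "ceph mgr module enable {{ item }}"
  loop: [dashboard, prometheus]
```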
2025-09-08 00:54:58.765676 | orchestrator | ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 08 September 2025 00:50:24 +0000 (0:00:00.754) 0:06:53.585 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 08 September 2025 00:50:25 +0000 (0:00:00.727) 0:06:54.312 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 08 September 2025 00:50:25 +0000 (0:00:00.713) 0:06:55.026 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 08 September 2025 00:50:26 +0000 (0:00:00.326) 0:06:55.352 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 08 September 2025 00:50:26 +0000 (0:00:00.310) 0:06:55.663 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 08 September 2025 00:50:27 +0000 (0:00:00.646) 0:06:56.309 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 08 September 2025 00:50:28 +0000 (0:00:01.046) 0:06:57.355 ******
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 08 September 2025 00:50:28 +0000 (0:00:00.330) 0:06:57.686 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 08 September 2025 00:50:28 +0000 (0:00:00.308) 0:06:57.994 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 08 September 2025 00:50:29 +0000 (0:00:00.316) 0:06:58.311 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 08 September 2025 00:50:29 +0000 (0:00:00.659) 0:06:58.971 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 08 September 2025 00:50:30 +0000 (0:00:00.351) 0:06:59.322 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 08 September 2025 00:50:30 +0000 (0:00:00.303) 0:06:59.625 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 08 September 2025 00:50:30 +0000 (0:00:00.304) 0:06:59.929 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 08 September 2025 00:50:31 +0000 (0:00:00.572) 0:07:00.502 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 08 September 2025 00:50:31 +0000 (0:00:00.338) 0:07:00.841 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Monday 08 September 2025 00:50:32 +0000 (0:00:00.520) 0:07:01.361 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Monday 08 September 2025 00:50:32 +0000 (0:00:00.607) 0:07:01.968 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Monday 08 September 2025 00:50:33 +0000 (0:00:00.654) 0:07:02.623 ******
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Monday 08 September 2025 00:50:34 +0000 (0:00:00.518) 0:07:03.141 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Monday 08 September 2025 00:50:34 +0000 (0:00:00.545) 0:07:03.686 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Monday 08 September 2025 00:50:34 +0000 (0:00:00.309) 0:07:03.996 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Monday 08 September 2025 00:50:35 +0000 (0:00:00.706) 0:07:04.703 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Monday 08 September 2025 00:50:35 +0000 (0:00:00.340) 0:07:05.043 ******
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
Monday 08 September 2025 00:50:39 +0000 (0:00:03.323) 0:07:08.367 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Monday 08 September 2025 00:50:39 +0000 (0:00:00.322) 0:07:08.689 ******
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Monday 08 September 2025 00:50:40 +0000 (0:00:00.572) 0:07:09.261 ******
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Monday 08 September 2025 00:50:41 +0000 (0:00:01.194) 0:07:10.456 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Monday 08 September 2025 00:50:43 +0000 (0:00:02.055) 0:07:12.511 ******
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Monday 08 September 2025 00:50:44 +0000 (0:00:01.189) 0:07:13.701 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Monday 08 September 2025 00:50:46 +0000 (0:00:02.043) 0:07:15.745 ******
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Monday 08 September 2025 00:50:47 +0000 (0:00:00.557) 0:07:16.302 ******
changed: [testbed-node-3] => (item={'data': 'osd-block-6245231a-5e27-588f-a545-a88193777b58', 'data_vg': 'ceph-6245231a-5e27-588f-a545-a88193777b58'})
changed: [testbed-node-4] => (item={'data': 'osd-block-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a', 'data_vg': 'ceph-39881e3d-2712-5fd1-9b8f-3e1ed3474a2a'})
changed: [testbed-node-5] => (item={'data': 'osd-block-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2', 'data_vg': 'ceph-8709f3ee-6295-5c1a-8e33-a410dc9aa8e2'})
changed: [testbed-node-3] => (item={'data': 'osd-block-7231c7d5-5dfe-5215-9efd-b7a5c24f93db', 'data_vg': 'ceph-7231c7d5-5dfe-5215-9efd-b7a5c24f93db'})
changed: [testbed-node-4] => (item={'data': 'osd-block-e84ec590-0593-5433-8536-9c5125166743', 'data_vg': 'ceph-e84ec590-0593-5433-8536-9c5125166743'})
changed: [testbed-node-5] => (item={'data': 'osd-block-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf', 'data_vg': 'ceph-2f5f4832-0bc1-5ef5-ba0d-5b3759bf17bf'})

TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Monday 08 September 2025 00:51:32 +0000 (0:00:45.614) 0:08:01.917 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Monday 08 September 2025 00:51:33 +0000 (0:00:00.358) 0:08:02.275 ******
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Monday 08 September 2025 00:51:33 +0000 (0:00:00.560) 0:08:02.836 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Monday 08 September 2025 00:51:34 +0000 (0:00:00.966) 0:08:03.803 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Monday 08 September 2025 00:51:37 +0000 (0:00:02.575) 0:08:06.378 ******
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-09-08 00:54:58.767388 | orchestrator | Monday 08 September 2025 00:51:37 +0000 (0:00:00.599) 0:08:06.978 ****** 2025-09-08 00:54:58.767395 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.767401 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.767408 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.767414 | orchestrator | 2025-09-08 00:54:58.767421 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-08 00:54:58.767428 | orchestrator | Monday 08 September 2025 00:51:39 +0000 (0:00:01.540) 0:08:08.518 ****** 2025-09-08 00:54:58.767434 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.767441 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.767447 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.767454 | orchestrator | 2025-09-08 00:54:58.767499 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-08 00:54:58.767507 | orchestrator | Monday 08 September 2025 00:51:40 +0000 (0:00:01.199) 0:08:09.718 ****** 2025-09-08 00:54:58.767514 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.767520 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.767527 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.767533 | orchestrator | 2025-09-08 00:54:58.767540 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-08 00:54:58.767547 | orchestrator | Monday 08 September 2025 00:51:42 +0000 (0:00:01.733) 0:08:11.451 ****** 2025-09-08 00:54:58.767553 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.767560 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.767566 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.767573 | orchestrator | 2025-09-08 00:54:58.767579 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-09-08 00:54:58.767586 | orchestrator | Monday 08 September 2025 00:51:42 +0000 (0:00:00.345) 0:08:11.796 ****** 2025-09-08 00:54:58.767593 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.767599 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.767606 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.767612 | orchestrator | 2025-09-08 00:54:58.767619 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-08 00:54:58.767626 | orchestrator | Monday 08 September 2025 00:51:43 +0000 (0:00:00.594) 0:08:12.391 ****** 2025-09-08 00:54:58.767632 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-09-08 00:54:58.767639 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-09-08 00:54:58.767646 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-08 00:54:58.767652 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-08 00:54:58.767659 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-09-08 00:54:58.767665 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-09-08 00:54:58.767672 | orchestrator | 2025-09-08 00:54:58.767678 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-08 00:54:58.767685 | orchestrator | Monday 08 September 2025 00:51:44 +0000 (0:00:01.128) 0:08:13.520 ****** 2025-09-08 00:54:58.767692 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-08 00:54:58.767698 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-08 00:54:58.767705 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-08 00:54:58.767711 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-08 00:54:58.767718 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-08 00:54:58.767725 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-08 00:54:58.767731 | orchestrator | 2025-09-08 00:54:58.767738 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-09-08 00:54:58.767744 | orchestrator | Monday 08 September 2025 00:51:46 +0000 (0:00:02.523) 0:08:16.043 ****** 2025-09-08 00:54:58.767751 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-08 00:54:58.767757 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-09-08 00:54:58.767768 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-08 00:54:58.767775 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-08 00:54:58.767782 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-08 00:54:58.767788 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-09-08 00:54:58.767795 | orchestrator | 2025-09-08 00:54:58.767802 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-08 00:54:58.767808 | orchestrator | Monday 08 September 2025 00:51:50 +0000 (0:00:03.476) 0:08:19.520 ****** 2025-09-08 00:54:58.767815 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.767821 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.767828 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-08 00:54:58.767834 | orchestrator | 2025-09-08 00:54:58.767841 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-08 00:54:58.767848 | orchestrator | Monday 08 September 2025 00:51:53 +0000 (0:00:02.601) 0:08:22.121 ****** 2025-09-08 00:54:58.767857 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.767864 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.767870 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-09-08 00:54:58.767877 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-08 00:54:58.767884 | orchestrator | 2025-09-08 00:54:58.767890 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-08 00:54:58.767897 | orchestrator | Monday 08 September 2025 00:52:05 +0000 (0:00:12.615) 0:08:34.737 ****** 2025-09-08 00:54:58.767904 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.767914 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.767920 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.767927 | orchestrator | 2025-09-08 00:54:58.767933 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-08 00:54:58.767940 | orchestrator | Monday 08 September 2025 00:52:06 +0000 (0:00:01.060) 0:08:35.798 ****** 2025-09-08 00:54:58.767947 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.767953 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.767960 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.767966 | orchestrator | 2025-09-08 00:54:58.767973 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-08 00:54:58.767979 | orchestrator | Monday 08 September 2025 00:52:07 +0000 (0:00:00.339) 0:08:36.138 ****** 2025-09-08 00:54:58.767986 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.767993 | orchestrator | 2025-09-08 00:54:58.767999 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-08 00:54:58.768006 | orchestrator | Monday 08 September 2025 00:52:07 +0000 (0:00:00.540) 0:08:36.679 ****** 2025-09-08 00:54:58.768012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:54:58.768019 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-08 00:54:58.768025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:54:58.768031 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768038 | orchestrator | 2025-09-08 00:54:58.768044 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-08 00:54:58.768050 | orchestrator | Monday 08 September 2025 00:52:08 +0000 (0:00:00.690) 0:08:37.369 ****** 2025-09-08 00:54:58.768056 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768062 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.768068 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.768074 | orchestrator | 2025-09-08 00:54:58.768080 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-08 00:54:58.768087 | orchestrator | Monday 08 September 2025 00:52:08 +0000 (0:00:00.624) 0:08:37.994 ****** 2025-09-08 00:54:58.768093 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768099 | orchestrator | 2025-09-08 00:54:58.768105 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-08 00:54:58.768111 | orchestrator | Monday 08 September 2025 00:52:09 +0000 (0:00:00.232) 0:08:38.226 ****** 2025-09-08 00:54:58.768117 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768123 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.768129 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.768135 | orchestrator | 2025-09-08 00:54:58.768142 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-08 00:54:58.768148 | orchestrator | Monday 08 September 2025 00:52:09 +0000 (0:00:00.327) 0:08:38.554 ****** 2025-09-08 00:54:58.768154 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768160 | orchestrator | 2025-09-08 00:54:58.768166 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-08 00:54:58.768172 | orchestrator | Monday 08 September 2025 00:52:09 +0000 (0:00:00.223) 0:08:38.777 ****** 2025-09-08 00:54:58.768182 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768188 | orchestrator | 2025-09-08 00:54:58.768194 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-08 00:54:58.768201 | orchestrator | Monday 08 September 2025 00:52:09 +0000 (0:00:00.206) 0:08:38.983 ****** 2025-09-08 00:54:58.768207 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768213 | orchestrator | 2025-09-08 00:54:58.768219 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-08 00:54:58.768225 | orchestrator | Monday 08 September 2025 00:52:10 +0000 (0:00:00.134) 0:08:39.117 ****** 2025-09-08 00:54:58.768231 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768237 | orchestrator | 2025-09-08 00:54:58.768243 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-08 00:54:58.768249 | orchestrator | Monday 08 September 2025 00:52:10 +0000 (0:00:00.234) 0:08:39.351 ****** 2025-09-08 00:54:58.768256 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768262 | orchestrator | 2025-09-08 00:54:58.768268 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-08 00:54:58.768274 | orchestrator | Monday 08 September 2025 00:52:10 +0000 (0:00:00.242) 0:08:39.594 ****** 2025-09-08 00:54:58.768283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:54:58.768290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:54:58.768296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-08 00:54:58.768302 | orchestrator | skipping: [testbed-node-3] 2025-09-08 
00:54:58.768308 | orchestrator | 2025-09-08 00:54:58.768314 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-08 00:54:58.768320 | orchestrator | Monday 08 September 2025 00:52:11 +0000 (0:00:00.953) 0:08:40.547 ****** 2025-09-08 00:54:58.768326 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768332 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.768339 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.768345 | orchestrator | 2025-09-08 00:54:58.768351 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-08 00:54:58.768357 | orchestrator | Monday 08 September 2025 00:52:11 +0000 (0:00:00.319) 0:08:40.867 ****** 2025-09-08 00:54:58.768363 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768369 | orchestrator | 2025-09-08 00:54:58.768375 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-08 00:54:58.768382 | orchestrator | Monday 08 September 2025 00:52:12 +0000 (0:00:00.258) 0:08:41.126 ****** 2025-09-08 00:54:58.768388 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768394 | orchestrator | 2025-09-08 00:54:58.768400 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-08 00:54:58.768406 | orchestrator | 2025-09-08 00:54:58.768412 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-08 00:54:58.768418 | orchestrator | Monday 08 September 2025 00:52:12 +0000 (0:00:00.634) 0:08:41.760 ****** 2025-09-08 00:54:58.768428 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.768435 | orchestrator | 2025-09-08 00:54:58.768441 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-08 00:54:58.768448 | orchestrator | Monday 08 September 2025 00:52:14 +0000 (0:00:01.427) 0:08:43.188 ****** 2025-09-08 00:54:58.768454 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.768472 | orchestrator | 2025-09-08 00:54:58.768479 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-08 00:54:58.768485 | orchestrator | Monday 08 September 2025 00:52:15 +0000 (0:00:01.207) 0:08:44.396 ****** 2025-09-08 00:54:58.768491 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.768501 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768507 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.768513 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.768520 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.768526 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.768532 | orchestrator | 2025-09-08 00:54:58.768538 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-08 00:54:58.768544 | orchestrator | Monday 08 September 2025 00:52:16 +0000 (0:00:01.041) 0:08:45.438 ****** 2025-09-08 00:54:58.768550 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.768556 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.768563 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.768569 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.768575 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.768581 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.768587 | orchestrator | 2025-09-08 00:54:58.768593 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-08 00:54:58.768599 | orchestrator | Monday 08 
September 2025 00:52:17 +0000 (0:00:01.080) 0:08:46.518 ****** 2025-09-08 00:54:58.768605 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.768612 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.768618 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.768624 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.768630 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.768636 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.768642 | orchestrator | 2025-09-08 00:54:58.768648 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-08 00:54:58.768654 | orchestrator | Monday 08 September 2025 00:52:18 +0000 (0:00:01.272) 0:08:47.791 ****** 2025-09-08 00:54:58.768660 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.768666 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.768672 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.768679 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.768685 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.768691 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.768697 | orchestrator | 2025-09-08 00:54:58.768703 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-08 00:54:58.768709 | orchestrator | Monday 08 September 2025 00:52:19 +0000 (0:00:01.000) 0:08:48.792 ****** 2025-09-08 00:54:58.768716 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.768722 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768728 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.768734 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.768740 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.768746 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.768752 | orchestrator | 2025-09-08 00:54:58.768758 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-09-08 00:54:58.768764 | orchestrator | Monday 08 September 2025 00:52:20 +0000 (0:00:01.025) 0:08:49.817 ****** 2025-09-08 00:54:58.768770 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.768777 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.768783 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.768789 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768795 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.768801 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.768807 | orchestrator | 2025-09-08 00:54:58.768813 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-08 00:54:58.768819 | orchestrator | Monday 08 September 2025 00:52:21 +0000 (0:00:00.582) 0:08:50.400 ****** 2025-09-08 00:54:58.768829 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.768835 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.768841 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.768847 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.768853 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.768864 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.768870 | orchestrator | 2025-09-08 00:54:58.768876 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-08 00:54:58.768882 | orchestrator | Monday 08 September 2025 00:52:22 +0000 (0:00:00.875) 0:08:51.275 ****** 2025-09-08 00:54:58.768888 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.768894 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.768901 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.768907 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.768913 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.768919 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.768925 | 
orchestrator | 2025-09-08 00:54:58.768931 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-08 00:54:58.768937 | orchestrator | Monday 08 September 2025 00:52:23 +0000 (0:00:01.121) 0:08:52.396 ****** 2025-09-08 00:54:58.768943 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.768949 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.768955 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.768961 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.768967 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.768973 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.768980 | orchestrator | 2025-09-08 00:54:58.768986 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-08 00:54:58.768992 | orchestrator | Monday 08 September 2025 00:52:24 +0000 (0:00:01.403) 0:08:53.800 ****** 2025-09-08 00:54:58.768998 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.769008 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.769015 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.769021 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.769027 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.769033 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.769039 | orchestrator | 2025-09-08 00:54:58.769045 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-08 00:54:58.769051 | orchestrator | Monday 08 September 2025 00:52:25 +0000 (0:00:00.602) 0:08:54.403 ****** 2025-09-08 00:54:58.769057 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.769064 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.769070 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.769076 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.769082 | orchestrator | skipping: [testbed-node-4] 2025-09-08 
00:54:58.769088 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.769094 | orchestrator | 2025-09-08 00:54:58.769100 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-08 00:54:58.769107 | orchestrator | Monday 08 September 2025 00:52:26 +0000 (0:00:00.897) 0:08:55.300 ****** 2025-09-08 00:54:58.769113 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.769119 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.769125 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.769131 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.769137 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.769143 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.769149 | orchestrator | 2025-09-08 00:54:58.769155 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-08 00:54:58.769162 | orchestrator | Monday 08 September 2025 00:52:26 +0000 (0:00:00.672) 0:08:55.973 ****** 2025-09-08 00:54:58.769168 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.769174 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.769180 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.769186 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.769192 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.769198 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.769204 | orchestrator | 2025-09-08 00:54:58.769211 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-08 00:54:58.769217 | orchestrator | Monday 08 September 2025 00:52:27 +0000 (0:00:00.846) 0:08:56.819 ****** 2025-09-08 00:54:58.769227 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.769233 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.769239 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.769245 | orchestrator | ok: 
[testbed-node-3] 2025-09-08 00:54:58.769251 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.769257 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.769263 | orchestrator | 2025-09-08 00:54:58.769269 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-08 00:54:58.769275 | orchestrator | Monday 08 September 2025 00:52:28 +0000 (0:00:00.620) 0:08:57.440 ****** 2025-09-08 00:54:58.769282 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.769288 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.769294 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.769300 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.769306 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.769312 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.769318 | orchestrator | 2025-09-08 00:54:58.769324 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-08 00:54:58.769330 | orchestrator | Monday 08 September 2025 00:52:29 +0000 (0:00:00.857) 0:08:58.297 ****** 2025-09-08 00:54:58.769336 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:54:58.769343 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:54:58.769349 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:54:58.769355 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.769361 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.769367 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.769373 | orchestrator | 2025-09-08 00:54:58.769379 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-08 00:54:58.769385 | orchestrator | Monday 08 September 2025 00:52:29 +0000 (0:00:00.591) 0:08:58.889 ****** 2025-09-08 00:54:58.769391 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.769397 | orchestrator | ok: [testbed-node-1] 2025-09-08 
00:54:58.769403 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.769410 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.769416 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.769422 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.769428 | orchestrator | 2025-09-08 00:54:58.769434 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-08 00:54:58.769443 | orchestrator | Monday 08 September 2025 00:52:30 +0000 (0:00:00.834) 0:08:59.723 ****** 2025-09-08 00:54:58.769449 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.769455 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.769477 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.769483 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.769489 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.769495 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.769502 | orchestrator | 2025-09-08 00:54:58.769508 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-08 00:54:58.769514 | orchestrator | Monday 08 September 2025 00:52:31 +0000 (0:00:00.663) 0:09:00.387 ****** 2025-09-08 00:54:58.769520 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:54:58.769526 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:54:58.769532 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:54:58.769538 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.769544 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.769550 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.769557 | orchestrator | 2025-09-08 00:54:58.769563 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-08 00:54:58.769569 | orchestrator | Monday 08 September 2025 00:52:32 +0000 (0:00:01.290) 0:09:01.677 ****** 2025-09-08 00:54:58.769575 | orchestrator | changed: [testbed-node-0] 2025-09-08 
TASK [ceph-crash : Get keys from monitors] *************************************
Monday 08 September 2025 00:52:36 +0000 (0:00:04.079) 0:09:05.757 ******
ok: [testbed-node-0]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Monday 08 September 2025 00:52:39 +0000 (0:00:02.497) 0:09:08.255 ******
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Monday 08 September 2025 00:52:41 +0000 (0:00:01.854) 0:09:10.110 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Monday 08 September 2025 00:52:41 +0000 (0:00:00.943) 0:09:11.054 ******
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Monday 08 September 2025 00:52:43 +0000 (0:00:01.345) 0:09:12.399 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Monday 08 September 2025 00:52:45 +0000 (0:00:01.795) 0:09:14.194 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Monday 08 September 2025 00:52:48 +0000 (0:00:03.471) 0:09:17.665 ******
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Monday 08 September 2025 00:52:49 +0000 (0:00:01.319) 0:09:18.985 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Monday 08 September 2025 00:52:50 +0000 (0:00:00.911) 0:09:19.896 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Monday 08 September 2025 00:52:52 +0000 (0:00:02.099) 0:09:21.996 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 08 September 2025 00:52:54 +0000 (0:00:01.073) 0:09:23.069 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 08 September 2025 00:52:54 +0000 (0:00:00.751) 0:09:23.821 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 08 September 2025 00:52:55 +0000 (0:00:00.526) 0:09:24.348 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 08 September 2025 00:52:55 +0000 (0:00:00.310) 0:09:24.658 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 08 September 2025 00:52:56 +0000 (0:00:01.046) 0:09:25.704 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 08 September 2025 00:52:57 +0000 (0:00:00.736) 0:09:26.441 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 08 September 2025 00:52:58 +0000 (0:00:00.726) 0:09:27.167 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 08 September 2025 00:52:58 +0000 (0:00:00.331) 0:09:27.499 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 08 September 2025 00:52:59 +0000 (0:00:00.624) 0:09:28.123 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 08 September 2025 00:52:59 +0000 (0:00:00.314) 0:09:28.437 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 08 September 2025 00:53:00 +0000 (0:00:00.749) 0:09:29.187 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 08 September 2025 00:53:00 +0000 (0:00:00.844) 0:09:30.031 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 08 September 2025 00:53:01 +0000 (0:00:00.596) 0:09:30.627 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 08 September 2025 00:53:01 +0000 (0:00:00.386) 0:09:31.013 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 08 September 2025 00:53:02 +0000 (0:00:00.394) 0:09:31.408 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 08 September 2025 00:53:02 +0000 (0:00:00.383) 0:09:31.792 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 08 September 2025 00:53:03 +0000 (0:00:00.676) 0:09:32.468 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 08 September 2025 00:53:03 +0000 (0:00:00.351) 0:09:32.819 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 08 September 2025 00:53:04 +0000 (0:00:00.328) 0:09:33.148 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 08 September 2025 00:53:04 +0000 (0:00:00.286) 0:09:33.434 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 08 September 2025 00:53:04 +0000 (0:00:00.620) 0:09:34.054 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Monday 08 September 2025 00:53:05 +0000 (0:00:00.468) 0:09:34.522 ******
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Monday 08 September 2025 00:53:05 +0000 (0:00:00.401) 0:09:34.924 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Monday 08 September 2025 00:53:08 +0000 (0:00:02.452) 0:09:37.377 ******
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Monday 08 September 2025 00:53:08 +0000 (0:00:00.179) 0:09:37.557 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Monday 08 September 2025 00:53:17 +0000 (0:00:08.785) 0:09:46.342 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Monday 08 September 2025 00:53:20 +0000 (0:00:03.635) 0:09:49.978 ******
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Monday 08 September 2025 00:53:21 +0000 (0:00:00.657) 0:09:50.636 ******
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Monday 08 September 2025 00:53:23 +0000 (0:00:01.992) 0:09:52.629 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Monday 08 September 2025 00:53:25 +0000 (0:00:02.144) 0:09:54.774 ******
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Monday 08 September 2025 00:53:27 +0000 (0:00:01.374) 0:09:56.148 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Monday 08 September 2025 00:53:29 +0000 (0:00:02.800) 0:09:58.948 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Monday 08 September 2025 00:53:30 +0000 (0:00:00.378) 0:09:59.326 ******
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Monday 08 September 2025 00:53:31 +0000 (0:00:00.895) 0:10:00.222 ******
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Monday 08 September 2025 00:53:31 +0000 (0:00:00.555) 0:10:00.778 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Monday 08 September 2025 00:53:33 +0000 (0:00:01.740) 0:10:02.518 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Monday 08 September 2025 00:53:34 +0000 (0:00:01.373) 0:10:03.892 ******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-mds : Systemd start mds container] **********************************
Monday 08 September 2025 00:53:37 +0000 (0:00:02.688) 0:10:06.580 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Monday 08 September 2025 00:53:39 +0000 (0:00:02.016) 0:10:08.596 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 08 September 2025 00:53:41 +0000 (0:00:01.562) 0:10:10.159 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Monday 08 September 2025 00:53:41 +0000 (0:00:00.662) 0:10:10.821 ******
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Monday 08 September 2025 00:53:42 +0000 (0:00:00.806) 0:10:11.628 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Monday 08 September 2025 00:53:42 +0000 (0:00:00.336) 0:10:11.964 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Monday 08 September 2025 00:53:44 +0000 (0:00:01.393) 0:10:13.358 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Monday 08 September 2025 00:53:45 +0000 (0:00:01.198) 0:10:14.556 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 08 September 2025 00:53:46 +0000 (0:00:00.624) 0:10:15.181 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 08 September 2025 00:53:46 +0000 (0:00:00.780) 0:10:15.962 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 08 September 2025 00:53:47 +0000 (0:00:00.565) 0:10:16.527 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 08 September 2025 00:53:47 +0000 (0:00:00.322) 0:10:16.850 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 08 September 2025 00:53:48 +0000 (0:00:00.954) 0:10:17.804 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 08 September 2025 00:53:49 +0000 (0:00:00.753) 0:10:18.557 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 08 September 2025 00:53:50 +0000 (0:00:00.743) 0:10:19.301 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 08 September 2025 00:53:50 +0000 (0:00:00.336) 0:10:19.638 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 08 September 2025 00:53:51 +0000 (0:00:00.566) 0:10:20.204 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 08 September 2025 00:53:51 +0000 (0:00:00.330) 0:10:20.534 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 08 September 2025 00:53:52 +0000 (0:00:00.750) 0:10:21.285 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 08 September 2025 00:53:52 +0000 (0:00:00.704) 0:10:21.990 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 08 September 2025 00:53:53 +0000 (0:00:00.599) 0:10:22.589 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 08 September 2025 00:53:53 +0000 (0:00:00.319) 0:10:22.908 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 08 September 2025 00:53:54 +0000 (0:00:00.353) 0:10:23.262 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 08 September 2025 00:53:54 +0000 (0:00:00.344) 0:10:23.606 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 08 September 2025 00:53:55 +0000 (0:00:00.571) 0:10:24.177 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 08 September 2025 00:53:55 +0000 (0:00:00.337) 0:10:24.514 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 08 September 2025 00:53:55 +0000 (0:00:00.344) 0:10:24.858 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 08 September 2025 00:53:56 +0000 (0:00:00.322) 0:10:25.180 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status]
************************* 2025-09-08 00:54:58.772477 | orchestrator | Monday 08 September 2025 00:53:56 +0000 (0:00:00.334) 0:10:25.515 ****** 2025-09-08 00:54:58.772482 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.772487 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.772493 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.772498 | orchestrator | 2025-09-08 00:54:58.772503 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-08 00:54:58.772509 | orchestrator | Monday 08 September 2025 00:53:57 +0000 (0:00:00.833) 0:10:26.348 ****** 2025-09-08 00:54:58.772514 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.772520 | orchestrator | 2025-09-08 00:54:58.772525 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-08 00:54:58.772530 | orchestrator | Monday 08 September 2025 00:53:57 +0000 (0:00:00.514) 0:10:26.862 ****** 2025-09-08 00:54:58.772540 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:54:58.772545 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-08 00:54:58.772551 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:54:58.772556 | orchestrator | 2025-09-08 00:54:58.772561 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-08 00:54:58.772569 | orchestrator | Monday 08 September 2025 00:54:00 +0000 (0:00:02.657) 0:10:29.520 ****** 2025-09-08 00:54:58.772575 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-08 00:54:58.772580 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-08 00:54:58.772586 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.772591 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-08 00:54:58.772597 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-08 00:54:58.772602 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.772607 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-08 00:54:58.772613 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-08 00:54:58.772618 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.772623 | orchestrator | 2025-09-08 00:54:58.772629 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-08 00:54:58.772634 | orchestrator | Monday 08 September 2025 00:54:01 +0000 (0:00:01.209) 0:10:30.729 ****** 2025-09-08 00:54:58.772640 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.772645 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.772650 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.772656 | orchestrator | 2025-09-08 00:54:58.772661 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-08 00:54:58.772666 | orchestrator | Monday 08 September 2025 00:54:02 +0000 (0:00:00.326) 0:10:31.055 ****** 2025-09-08 00:54:58.772672 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.772677 | orchestrator | 2025-09-08 00:54:58.772682 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-08 00:54:58.772691 | orchestrator | Monday 08 September 2025 00:54:02 +0000 (0:00:00.755) 0:10:31.811 ****** 2025-09-08 00:54:58.772697 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-08 00:54:58.772702 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-09-08 00:54:58.772708 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-08 00:54:58.772713 | orchestrator | 2025-09-08 00:54:58.772718 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-08 00:54:58.772724 | orchestrator | Monday 08 September 2025 00:54:03 +0000 (0:00:00.820) 0:10:32.632 ****** 2025-09-08 00:54:58.772729 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:54:58.772734 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-08 00:54:58.772740 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:54:58.772745 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-08 00:54:58.772751 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:54:58.772756 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-08 00:54:58.772794 | orchestrator | 2025-09-08 00:54:58.772799 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-08 00:54:58.772805 | orchestrator | Monday 08 September 2025 00:54:08 +0000 (0:00:04.638) 0:10:37.270 ****** 2025-09-08 00:54:58.772810 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:54:58.772816 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:54:58.772821 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-09-08 00:54:58.772826 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:54:58.772832 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:54:58.772837 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-08 00:54:58.772843 | orchestrator | 2025-09-08 00:54:58.772848 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-08 00:54:58.772853 | orchestrator | Monday 08 September 2025 00:54:10 +0000 (0:00:02.391) 0:10:39.662 ****** 2025-09-08 00:54:58.772859 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-08 00:54:58.772864 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.772869 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-08 00:54:58.772875 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.772880 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-08 00:54:58.772886 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.772891 | orchestrator | 2025-09-08 00:54:58.772896 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-08 00:54:58.772902 | orchestrator | Monday 08 September 2025 00:54:12 +0000 (0:00:01.498) 0:10:41.160 ****** 2025-09-08 00:54:58.772907 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-08 00:54:58.772912 | orchestrator | 2025-09-08 00:54:58.772918 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-08 00:54:58.772923 | orchestrator | Monday 08 September 2025 00:54:12 +0000 (0:00:00.246) 0:10:41.407 ****** 2025-09-08 00:54:58.772932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.772938 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.772943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.772949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.772954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.772959 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.772965 | orchestrator | 2025-09-08 00:54:58.772970 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-08 00:54:58.772975 | orchestrator | Monday 08 September 2025 00:54:12 +0000 (0:00:00.625) 0:10:42.033 ****** 2025-09-08 00:54:58.772981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.772986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.772992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.772997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.773003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-08 00:54:58.773011 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.773017 | orchestrator | 2025-09-08 00:54:58.773022 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-08 00:54:58.773028 | orchestrator | Monday 08 September 2025 00:54:13 +0000 (0:00:00.655) 0:10:42.689 ****** 2025-09-08 00:54:58.773033 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:54:58.773039 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:54:58.773044 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:54:58.773049 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:54:58.773055 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-08 00:54:58.773060 | orchestrator | 2025-09-08 00:54:58.773066 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-08 00:54:58.773071 | orchestrator | Monday 08 September 2025 00:54:45 +0000 (0:00:31.405) 0:11:14.095 ****** 2025-09-08 00:54:58.773077 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.773082 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.773087 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.773093 | orchestrator | 2025-09-08 00:54:58.773098 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-08 00:54:58.773104 | orchestrator | Monday 08 September 2025 00:54:45 +0000 (0:00:00.312) 0:11:14.407 
****** 2025-09-08 00:54:58.773109 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.773114 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.773120 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.773125 | orchestrator | 2025-09-08 00:54:58.773130 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-08 00:54:58.773136 | orchestrator | Monday 08 September 2025 00:54:45 +0000 (0:00:00.336) 0:11:14.744 ****** 2025-09-08 00:54:58.773141 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.773146 | orchestrator | 2025-09-08 00:54:58.773152 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-08 00:54:58.773157 | orchestrator | Monday 08 September 2025 00:54:46 +0000 (0:00:00.868) 0:11:15.612 ****** 2025-09-08 00:54:58.773163 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.773168 | orchestrator | 2025-09-08 00:54:58.773173 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-08 00:54:58.773179 | orchestrator | Monday 08 September 2025 00:54:47 +0000 (0:00:00.531) 0:11:16.144 ****** 2025-09-08 00:54:58.773184 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.773189 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.773195 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.773200 | orchestrator | 2025-09-08 00:54:58.773205 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-08 00:54:58.773211 | orchestrator | Monday 08 September 2025 00:54:48 +0000 (0:00:01.800) 0:11:17.945 ****** 2025-09-08 00:54:58.773219 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.773224 | orchestrator | changed: 
[testbed-node-4] 2025-09-08 00:54:58.773230 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.773235 | orchestrator | 2025-09-08 00:54:58.773262 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-08 00:54:58.773274 | orchestrator | Monday 08 September 2025 00:54:50 +0000 (0:00:01.174) 0:11:19.119 ****** 2025-09-08 00:54:58.773279 | orchestrator | changed: [testbed-node-3] 2025-09-08 00:54:58.773285 | orchestrator | changed: [testbed-node-4] 2025-09-08 00:54:58.773290 | orchestrator | changed: [testbed-node-5] 2025-09-08 00:54:58.773295 | orchestrator | 2025-09-08 00:54:58.773301 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-08 00:54:58.773306 | orchestrator | Monday 08 September 2025 00:54:51 +0000 (0:00:01.712) 0:11:20.832 ****** 2025-09-08 00:54:58.773312 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-08 00:54:58.773317 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-08 00:54:58.773323 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-08 00:54:58.773328 | orchestrator | 2025-09-08 00:54:58.773334 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-08 00:54:58.773339 | orchestrator | Monday 08 September 2025 00:54:54 +0000 (0:00:02.540) 0:11:23.373 ****** 2025-09-08 00:54:58.773347 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.773353 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.773358 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.773364 | orchestrator | 2025-09-08 00:54:58.773369 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-09-08 00:54:58.773374 | orchestrator | Monday 08 September 2025 00:54:54 +0000 (0:00:00.362) 0:11:23.736 ****** 2025-09-08 00:54:58.773380 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:54:58.773385 | orchestrator | 2025-09-08 00:54:58.773391 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-08 00:54:58.773396 | orchestrator | Monday 08 September 2025 00:54:55 +0000 (0:00:00.815) 0:11:24.551 ****** 2025-09-08 00:54:58.773402 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.773407 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.773413 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.773418 | orchestrator | 2025-09-08 00:54:58.773423 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-08 00:54:58.773429 | orchestrator | Monday 08 September 2025 00:54:55 +0000 (0:00:00.331) 0:11:24.883 ****** 2025-09-08 00:54:58.773434 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.773440 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:54:58.773445 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:54:58.773450 | orchestrator | 2025-09-08 00:54:58.773456 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-08 00:54:58.773473 | orchestrator | Monday 08 September 2025 00:54:56 +0000 (0:00:00.403) 0:11:25.287 ****** 2025-09-08 00:54:58.773479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-08 00:54:58.773484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-08 00:54:58.773490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-08 00:54:58.773495 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:54:58.773500 | 
orchestrator | 2025-09-08 00:54:58.773506 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-08 00:54:58.773511 | orchestrator | Monday 08 September 2025 00:54:57 +0000 (0:00:00.908) 0:11:26.195 ****** 2025-09-08 00:54:58.773517 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:54:58.773522 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:54:58.773528 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:54:58.773533 | orchestrator | 2025-09-08 00:54:58.773538 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:54:58.773544 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-09-08 00:54:58.773553 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-08 00:54:58.773559 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-08 00:54:58.773565 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-09-08 00:54:58.773570 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-08 00:54:58.773576 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-08 00:54:58.773581 | orchestrator | 2025-09-08 00:54:58.773587 | orchestrator | 2025-09-08 00:54:58.773592 | orchestrator | 2025-09-08 00:54:58.773598 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:54:58.773603 | orchestrator | Monday 08 September 2025 00:54:57 +0000 (0:00:00.249) 0:11:26.444 ****** 2025-09-08 00:54:58.773609 | orchestrator | =============================================================================== 2025-09-08 00:54:58.773614 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 69.01s 2025-09-08 00:54:58.773623 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 45.61s 2025-09-08 00:54:58.773628 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.41s 2025-09-08 00:54:58.773634 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.22s 2025-09-08 00:54:58.773639 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.90s 2025-09-08 00:54:58.773645 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.62s 2025-09-08 00:54:58.773650 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.33s 2025-09-08 00:54:58.773655 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.85s 2025-09-08 00:54:58.773661 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.79s 2025-09-08 00:54:58.773666 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.67s 2025-09-08 00:54:58.773672 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.37s 2025-09-08 00:54:58.773677 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 5.03s 2025-09-08 00:54:58.773682 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.89s 2025-09-08 00:54:58.773688 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.64s 2025-09-08 00:54:58.773693 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.08s 2025-09-08 00:54:58.773701 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 3.73s 2025-09-08 00:54:58.773707 | orchestrator | ceph-mds : 
Create ceph filesystem --------------------------------------- 3.64s 2025-09-08 00:54:58.773712 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.48s 2025-09-08 00:54:58.773718 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.47s 2025-09-08 00:54:58.773723 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.32s 2025-09-08 00:54:58.773729 | orchestrator | 2025-09-08 00:54:58 | INFO  | Task 9f46d043-3dec-4d09-8767-83d5df57174d is in state SUCCESS 2025-09-08 00:54:58.773734 | orchestrator | 2025-09-08 00:54:58 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:54:58.773740 | orchestrator | 2025-09-08 00:54:58 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:54:58.773745 | orchestrator | 2025-09-08 00:54:58 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:01.800285 | orchestrator | 2025-09-08 00:55:01 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:01.802978 | orchestrator | 2025-09-08 00:55:01 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:01.805634 | orchestrator | 2025-09-08 00:55:01 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:01.806106 | orchestrator | 2025-09-08 00:55:01 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:04.862886 | orchestrator | 2025-09-08 00:55:04 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:04.864506 | orchestrator | 2025-09-08 00:55:04 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:04.866994 | orchestrator | 2025-09-08 00:55:04 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:04.867278 | orchestrator | 2025-09-08 00:55:04 | INFO  | Wait 1 second(s) until the next 
check 2025-09-08 00:55:07.918338 | orchestrator | 2025-09-08 00:55:07 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:07.920383 | orchestrator | 2025-09-08 00:55:07 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:07.922590 | orchestrator | 2025-09-08 00:55:07 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:07.922619 | orchestrator | 2025-09-08 00:55:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:10.963958 | orchestrator | 2025-09-08 00:55:10 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:10.965665 | orchestrator | 2025-09-08 00:55:10 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:10.967517 | orchestrator | 2025-09-08 00:55:10 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:10.967550 | orchestrator | 2025-09-08 00:55:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:14.024138 | orchestrator | 2025-09-08 00:55:14 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:14.025559 | orchestrator | 2025-09-08 00:55:14 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:14.027345 | orchestrator | 2025-09-08 00:55:14 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:14.027364 | orchestrator | 2025-09-08 00:55:14 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:17.079393 | orchestrator | 2025-09-08 00:55:17 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:17.079936 | orchestrator | 2025-09-08 00:55:17 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:17.082112 | orchestrator | 2025-09-08 00:55:17 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 
00:55:17.082134 | orchestrator | 2025-09-08 00:55:17 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:20.132605 | orchestrator | 2025-09-08 00:55:20 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:20.134007 | orchestrator | 2025-09-08 00:55:20 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:20.135644 | orchestrator | 2025-09-08 00:55:20 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:20.137371 | orchestrator | 2025-09-08 00:55:20 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:23.180662 | orchestrator | 2025-09-08 00:55:23 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:23.184727 | orchestrator | 2025-09-08 00:55:23 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:23.187051 | orchestrator | 2025-09-08 00:55:23 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:23.187073 | orchestrator | 2025-09-08 00:55:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:26.247866 | orchestrator | 2025-09-08 00:55:26 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:26.248722 | orchestrator | 2025-09-08 00:55:26 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:26.251531 | orchestrator | 2025-09-08 00:55:26 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:26.251553 | orchestrator | 2025-09-08 00:55:26 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:29.294302 | orchestrator | 2025-09-08 00:55:29 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:29.298374 | orchestrator | 2025-09-08 00:55:29 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:29.299406 | orchestrator | 2025-09-08 00:55:29 | 
INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:29.300066 | orchestrator | 2025-09-08 00:55:29 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:32.360388 | orchestrator | 2025-09-08 00:55:32 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:32.363228 | orchestrator | 2025-09-08 00:55:32 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:32.364083 | orchestrator | 2025-09-08 00:55:32 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state STARTED 2025-09-08 00:55:32.364355 | orchestrator | 2025-09-08 00:55:32 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:53.716986 | orchestrator | 2025-09-08 00:55:53 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:53.718785 | orchestrator
| 2025-09-08 00:55:53 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:53.721158 | orchestrator | 2025-09-08 00:55:53 | INFO  | Task 0603ed00-8f27-4afe-91a4-3b90cea04da3 is in state SUCCESS 2025-09-08 00:55:53.723674 | orchestrator | 2025-09-08 00:55:53.723711 | orchestrator | 2025-09-08 00:55:53.723723 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:55:53.723735 | orchestrator | 2025-09-08 00:55:53.723747 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:55:53.723759 | orchestrator | Monday 08 September 2025 00:52:55 +0000 (0:00:00.304) 0:00:00.304 ****** 2025-09-08 00:55:53.723770 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:53.723782 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:55:53.723794 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:55:53.723805 | orchestrator | 2025-09-08 00:55:53.723816 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:55:53.723827 | orchestrator | Monday 08 September 2025 00:52:56 +0000 (0:00:00.298) 0:00:00.603 ****** 2025-09-08 00:55:53.723839 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-08 00:55:53.723851 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-08 00:55:53.723862 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-08 00:55:53.723873 | orchestrator | 2025-09-08 00:55:53.723884 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-08 00:55:53.723895 | orchestrator | 2025-09-08 00:55:53.723905 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-08 00:55:53.723918 | orchestrator | Monday 08 September 2025 00:52:56 +0000 (0:00:00.428) 0:00:01.032 ****** 2025-09-08 00:55:53.723929 | 
orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:53.723940 | orchestrator | 2025-09-08 00:55:53.723951 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-08 00:55:53.723962 | orchestrator | Monday 08 September 2025 00:52:57 +0000 (0:00:00.508) 0:00:01.541 ****** 2025-09-08 00:55:53.723973 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:55:53.724013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:55:53.724024 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-08 00:55:53.724035 | orchestrator | 2025-09-08 00:55:53.724046 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-08 00:55:53.724056 | orchestrator | Monday 08 September 2025 00:52:57 +0000 (0:00:00.687) 0:00:02.228 ****** 2025-09-08 00:55:53.724071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.724103 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.724128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.724143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.724166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.724180 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.724191 | orchestrator | 2025-09-08 00:55:53.724208 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-08 00:55:53.724219 | orchestrator | Monday 08 September 2025 00:52:59 +0000 (0:00:01.728) 0:00:03.957 ****** 2025-09-08 00:55:53.724231 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:53.724242 | orchestrator | 2025-09-08 00:55:53.724255 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-08 00:55:53.724270 | orchestrator | Monday 08 September 2025 00:52:59 +0000 (0:00:00.546) 0:00:04.503 ****** 2025-09-08 00:55:53.724301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.724322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.724353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.724376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.724413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.724428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.724477 | orchestrator | 2025-09-08 00:55:53.724492 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS 
certificate] *** 2025-09-08 00:55:53.724505 | orchestrator | Monday 08 September 2025 00:53:02 +0000 (0:00:02.758) 0:00:07.261 ****** 2025-09-08 00:55:53.724518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:55:53.724538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:55:53.724552 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:53.724573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:55:53.724588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:55:53.724609 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:53.724621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:55:53.724633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:55:53.724644 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:53.724655 | orchestrator | 2025-09-08 00:55:53.724671 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-08 00:55:53.724682 | orchestrator | Monday 08 September 2025 00:53:04 +0000 (0:00:01.706) 0:00:08.967 ****** 2025-09-08 00:55:53.724700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:55:53.724719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:55:53.724731 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:53.724742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:55:53.724754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:55:53.724766 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:53.724786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-08 00:55:53.724810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-08 00:55:53.724822 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:53.724833 | orchestrator | 2025-09-08 00:55:53.724844 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-08 00:55:53.724855 | orchestrator | Monday 08 September 2025 00:53:05 +0000 (0:00:00.964) 0:00:09.932 ****** 2025-09-08 00:55:53.724866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.724878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.724895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.724928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.724941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.724954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.724966 | orchestrator | 2025-09-08 00:55:53.724977 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-08 00:55:53.724987 | orchestrator | Monday 08 September 2025 00:53:07 +0000 (0:00:02.268) 0:00:12.201 ****** 2025-09-08 00:55:53.724998 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:53.725009 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:53.725020 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:53.725031 | orchestrator | 2025-09-08 00:55:53.725047 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-08 00:55:53.725058 | orchestrator | Monday 08 September 2025 00:53:10 +0000 (0:00:03.088) 0:00:15.289 ****** 2025-09-08 00:55:53.725068 | orchestrator | changed: 
[testbed-node-0] 2025-09-08 00:55:53.725086 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:53.725097 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:53.725108 | orchestrator | 2025-09-08 00:55:53.725119 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-08 00:55:53.725129 | orchestrator | Monday 08 September 2025 00:53:12 +0000 (0:00:01.530) 0:00:16.820 ****** 2025-09-08 00:55:53.725149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.725161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.725173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-08 00:55:53.725185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.725215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.725228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-08 00:55:53.725239 | orchestrator | 2025-09-08 00:55:53.725250 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-08 00:55:53.725261 | orchestrator | Monday 08 September 2025 00:53:14 +0000 (0:00:01.890) 0:00:18.711 ****** 2025-09-08 00:55:53.725280 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:53.725300 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:55:53.725319 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:55:53.725336 | orchestrator | 2025-09-08 00:55:53.725357 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-08 00:55:53.725376 | orchestrator | Monday 08 September 2025 00:53:14 +0000 (0:00:00.305) 0:00:19.016 ****** 2025-09-08 00:55:53.725393 | orchestrator | 2025-09-08 00:55:53.725405 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-08 00:55:53.725416 | orchestrator | Monday 08 September 2025 00:53:14 +0000 (0:00:00.067) 0:00:19.084 ****** 2025-09-08 00:55:53.725426 | orchestrator | 2025-09-08 00:55:53.725437 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-08 00:55:53.725476 | orchestrator | Monday 08 September 2025 00:53:14 +0000 (0:00:00.068) 0:00:19.152 ****** 2025-09-08 00:55:53.725488 | orchestrator | 2025-09-08 00:55:53.725498 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-08 00:55:53.725509 
| orchestrator | Monday 08 September 2025 00:53:14 +0000 (0:00:00.238) 0:00:19.391 ****** 2025-09-08 00:55:53.725520 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:53.725531 | orchestrator | 2025-09-08 00:55:53.725541 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-08 00:55:53.725552 | orchestrator | Monday 08 September 2025 00:53:15 +0000 (0:00:00.245) 0:00:19.637 ****** 2025-09-08 00:55:53.725563 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:55:53.725574 | orchestrator | 2025-09-08 00:55:53.725593 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-08 00:55:53.725604 | orchestrator | Monday 08 September 2025 00:53:15 +0000 (0:00:00.206) 0:00:19.843 ****** 2025-09-08 00:55:53.725614 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:53.725625 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:53.725636 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:53.725647 | orchestrator | 2025-09-08 00:55:53.725658 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-08 00:55:53.725668 | orchestrator | Monday 08 September 2025 00:54:23 +0000 (0:01:07.939) 0:01:27.782 ****** 2025-09-08 00:55:53.725679 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:53.725690 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:55:53.725701 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:55:53.725711 | orchestrator | 2025-09-08 00:55:53.725722 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-08 00:55:53.725733 | orchestrator | Monday 08 September 2025 00:55:42 +0000 (0:01:19.716) 0:02:47.498 ****** 2025-09-08 00:55:53.725744 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:55:53.725755 | orchestrator | 
2025-09-08 00:55:53.725771 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-08 00:55:53.725782 | orchestrator | Monday 08 September 2025 00:55:43 +0000 (0:00:00.741) 0:02:48.240 ****** 2025-09-08 00:55:53.725793 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:53.725804 | orchestrator | 2025-09-08 00:55:53.725815 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-08 00:55:53.725825 | orchestrator | Monday 08 September 2025 00:55:46 +0000 (0:00:02.406) 0:02:50.647 ****** 2025-09-08 00:55:53.725836 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:55:53.725847 | orchestrator | 2025-09-08 00:55:53.725857 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-08 00:55:53.725868 | orchestrator | Monday 08 September 2025 00:55:48 +0000 (0:00:02.206) 0:02:52.853 ****** 2025-09-08 00:55:53.725879 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:53.725890 | orchestrator | 2025-09-08 00:55:53.725901 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-08 00:55:53.725911 | orchestrator | Monday 08 September 2025 00:55:50 +0000 (0:00:02.652) 0:02:55.506 ****** 2025-09-08 00:55:53.725922 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:55:53.725933 | orchestrator | 2025-09-08 00:55:53.725950 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:55:53.725962 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 00:55:53.725974 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-08 00:55:53.725985 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-08 00:55:53.725996 | orchestrator | 2025-09-08 
00:55:53.726007 | orchestrator | 2025-09-08 00:55:53.726064 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:55:53.726078 | orchestrator | Monday 08 September 2025 00:55:53 +0000 (0:00:02.374) 0:02:57.880 ****** 2025-09-08 00:55:53.726089 | orchestrator | =============================================================================== 2025-09-08 00:55:53.726100 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 79.72s 2025-09-08 00:55:53.726111 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.94s 2025-09-08 00:55:53.726122 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.09s 2025-09-08 00:55:53.726132 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.76s 2025-09-08 00:55:53.726143 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.65s 2025-09-08 00:55:53.726161 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.41s 2025-09-08 00:55:53.726172 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.37s 2025-09-08 00:55:53.726182 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.27s 2025-09-08 00:55:53.726193 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.21s 2025-09-08 00:55:53.726204 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.89s 2025-09-08 00:55:53.726214 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.73s 2025-09-08 00:55:53.726225 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.71s 2025-09-08 00:55:53.726236 | orchestrator | opensearch : Copying over opensearch-dashboards config file 
------------- 1.53s 2025-09-08 00:55:53.726247 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.96s 2025-09-08 00:55:53.726257 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.74s 2025-09-08 00:55:53.726271 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.69s 2025-09-08 00:55:53.726290 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2025-09-08 00:55:53.726308 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2025-09-08 00:55:53.726325 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-09-08 00:55:53.726346 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.37s 2025-09-08 00:55:53.726364 | orchestrator | 2025-09-08 00:55:53 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:56.771350 | orchestrator | 2025-09-08 00:55:56 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:56.773240 | orchestrator | 2025-09-08 00:55:56 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:56.773293 | orchestrator | 2025-09-08 00:55:56 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:55:59.818828 | orchestrator | 2025-09-08 00:55:59 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:55:59.821589 | orchestrator | 2025-09-08 00:55:59 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:55:59.821629 | orchestrator | 2025-09-08 00:55:59 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:02.866938 | orchestrator | 2025-09-08 00:56:02 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:02.867381 | orchestrator | 2025-09-08 00:56:02 | INFO  | Task 
91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:56:02.868195 | orchestrator | 2025-09-08 00:56:02 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:05.915157 | orchestrator | 2025-09-08 00:56:05 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:05.917346 | orchestrator | 2025-09-08 00:56:05 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:56:05.917936 | orchestrator | 2025-09-08 00:56:05 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:08.962643 | orchestrator | 2025-09-08 00:56:08 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:08.964993 | orchestrator | 2025-09-08 00:56:08 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state STARTED 2025-09-08 00:56:08.965028 | orchestrator | 2025-09-08 00:56:08 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:12.012368 | orchestrator | 2025-09-08 00:56:12 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:12.015906 | orchestrator | 2025-09-08 00:56:12 | INFO  | Task 91e2a541-0123-42ed-8491-269aa18ea01b is in state SUCCESS 2025-09-08 00:56:12.017813 | orchestrator | 2025-09-08 00:56:12.017855 | orchestrator | 2025-09-08 00:56:12.017869 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-08 00:56:12.017882 | orchestrator | 2025-09-08 00:56:12.017893 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-08 00:56:12.017904 | orchestrator | Monday 08 September 2025 00:52:55 +0000 (0:00:00.110) 0:00:00.110 ****** 2025-09-08 00:56:12.017915 | orchestrator | ok: [localhost] => { 2025-09-08 00:56:12.017928 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-09-08 00:56:12.017940 | orchestrator | } 2025-09-08 00:56:12.017951 | orchestrator | 2025-09-08 00:56:12.017962 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-08 00:56:12.017973 | orchestrator | Monday 08 September 2025 00:52:55 +0000 (0:00:00.064) 0:00:00.175 ****** 2025-09-08 00:56:12.017984 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-08 00:56:12.017997 | orchestrator | ...ignoring 2025-09-08 00:56:12.018008 | orchestrator | 2025-09-08 00:56:12.018092 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-08 00:56:12.018107 | orchestrator | Monday 08 September 2025 00:52:58 +0000 (0:00:02.879) 0:00:03.055 ****** 2025-09-08 00:56:12.018118 | orchestrator | skipping: [localhost] 2025-09-08 00:56:12.018130 | orchestrator | 2025-09-08 00:56:12.018140 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-08 00:56:12.018151 | orchestrator | Monday 08 September 2025 00:52:58 +0000 (0:00:00.058) 0:00:03.113 ****** 2025-09-08 00:56:12.018162 | orchestrator | ok: [localhost] 2025-09-08 00:56:12.018173 | orchestrator | 2025-09-08 00:56:12.018184 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:56:12.018194 | orchestrator | 2025-09-08 00:56:12.018205 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:56:12.018216 | orchestrator | Monday 08 September 2025 00:52:58 +0000 (0:00:00.184) 0:00:03.297 ****** 2025-09-08 00:56:12.018227 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:12.018238 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:56:12.018249 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:56:12.018260 | orchestrator | 2025-09-08 00:56:12.018270 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:56:12.018281 | orchestrator | Monday 08 September 2025 00:52:59 +0000 (0:00:00.310) 0:00:03.607 ****** 2025-09-08 00:56:12.018293 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-08 00:56:12.018305 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-08 00:56:12.018316 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-08 00:56:12.018327 | orchestrator | 2025-09-08 00:56:12.018338 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-08 00:56:12.018349 | orchestrator | 2025-09-08 00:56:12.018360 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-08 00:56:12.018370 | orchestrator | Monday 08 September 2025 00:52:59 +0000 (0:00:00.731) 0:00:04.339 ****** 2025-09-08 00:56:12.018381 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-08 00:56:12.018392 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-08 00:56:12.018403 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-08 00:56:12.018414 | orchestrator | 2025-09-08 00:56:12.018424 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-08 00:56:12.018435 | orchestrator | Monday 08 September 2025 00:53:00 +0000 (0:00:00.469) 0:00:04.808 ****** 2025-09-08 00:56:12.018469 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:56:12.018498 | orchestrator | 2025-09-08 00:56:12.018510 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-08 00:56:12.018520 | orchestrator | Monday 08 September 2025 00:53:00 +0000 (0:00:00.663) 0:00:05.471 ****** 2025-09-08 00:56:12.018570 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-08 00:56:12.018588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-08 00:56:12.018607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-08 00:56:12.018627 | orchestrator | 2025-09-08 00:56:12.018645 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-08 00:56:12.018657 | orchestrator | Monday 08 September 2025 00:53:04 +0000 (0:00:03.757) 0:00:09.229 ****** 2025-09-08 00:56:12.018668 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.018679 | orchestrator | 
skipping: [testbed-node-1] 2025-09-08 00:56:12.018689 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.018700 | orchestrator | 2025-09-08 00:56:12.018711 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-08 00:56:12.018722 | orchestrator | Monday 08 September 2025 00:53:05 +0000 (0:00:00.649) 0:00:09.878 ****** 2025-09-08 00:56:12.018732 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.018743 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.018755 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.018765 | orchestrator | 2025-09-08 00:56:12.018776 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-08 00:56:12.018787 | orchestrator | Monday 08 September 2025 00:53:06 +0000 (0:00:01.572) 0:00:11.451 ****** 2025-09-08 00:56:12.018799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-08 00:56:12.018830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-08 00:56:12.018844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-08 00:56:12.018862 | orchestrator |
2025-09-08 00:56:12.018873 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-09-08 00:56:12.018884 | orchestrator | Monday 08 September 2025 00:53:10 +0000 (0:00:03.327) 0:00:14.779 ******
2025-09-08 00:56:12.018895 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:12.018906 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:12.018916 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:56:12.018927 | orchestrator |
2025-09-08 00:56:12.018938 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-09-08 00:56:12.018949 | orchestrator | Monday 08 September 2025 00:53:11 +0000 (0:00:01.096) 0:00:15.875 ******
2025-09-08 00:56:12.018960 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:56:12.018970 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:56:12.018981 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:56:12.018992 | orchestrator |
2025-09-08 00:56:12.019003 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-08 00:56:12.019014 | orchestrator | Monday 08 September 2025 00:53:15 +0000 (0:00:04.032) 0:00:19.908 ******
2025-09-08 00:56:12.019025 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:56:12.019036 | orchestrator |
2025-09-08 00:56:12.019046 | 
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-08 00:56:12.019062 | orchestrator | Monday 08 September 2025 00:53:15 +0000 (0:00:00.549) 0:00:20.457 ****** 2025-09-08 00:56:12.019082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:56:12.019095 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.019107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:56:12.019125 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.019148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 
00:56:12.019161 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:12.019172 | orchestrator | 2025-09-08 00:56:12.019183 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-08 00:56:12.019194 | orchestrator | Monday 08 September 2025 00:53:19 +0000 (0:00:03.475) 0:00:23.933 ****** 2025-09-08 00:56:12.019205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:56:12.019224 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:12.019247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:56:12.019260 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.019272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:56:12.019290 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.019301 | orchestrator | 2025-09-08 00:56:12.019312 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-08 00:56:12.019323 | orchestrator | Monday 08 September 2025 00:53:22 +0000 (0:00:03.052) 0:00:26.985 ****** 2025-09-08 00:56:12.019431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:56:12.019470 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.019482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:56:12.019503 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.019520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-08 00:56:12.019533 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:12.019544 | orchestrator | 2025-09-08 00:56:12.019555 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-08 00:56:12.019566 | orchestrator | Monday 08 September 2025 00:53:25 +0000 (0:00:03.117) 0:00:30.102 ****** 2025-09-08 00:56:12.019588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-08 00:56:12.019620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-08 00:56:12.019642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-08 00:56:12.019662 | orchestrator |
2025-09-08 00:56:12.019674 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-09-08 00:56:12.019685 | orchestrator | Monday 08 September 2025 00:53:29 +0000 (0:00:04.264) 0:00:34.367 ******
2025-09-08 00:56:12.019695 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:56:12.019706 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:56:12.019717 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:56:12.019728 | orchestrator |
2025-09-08 00:56:12.019738 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-09-08 00:56:12.019749 | orchestrator | Monday 08 September 2025 00:53:31 +0000 (0:00:01.160) 0:00:35.527 ******
2025-09-08 00:56:12.019760 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:56:12.019771 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:56:12.019782 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:56:12.019793 | orchestrator |
2025-09-08 00:56:12.019804 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-09-08 00:56:12.019814 | orchestrator | Monday 08 September 2025 00:53:31 +0000 (0:00:00.557) 0:00:36.085 ******
2025-09-08 00:56:12.019825 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:56:12.019836 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:56:12.019847 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:56:12.019857 | orchestrator |
2025-09-08 00:56:12.019868 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-09-08 00:56:12.019879 | orchestrator | Monday 08 September 2025 00:53:32 +0000 (0:00:00.440) 0:00:36.525 ******
2025-09-08 00:56:12.019891 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-09-08 00:56:12.019902 | orchestrator | ...ignoring
2025-09-08 00:56:12.019913 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-09-08 00:56:12.019924 | orchestrator | ...ignoring
2025-09-08 00:56:12.019936 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-09-08 00:56:12.019946 | orchestrator | ...ignoring
2025-09-08 00:56:12.019957 | orchestrator |
2025-09-08 00:56:12.019968 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-09-08 00:56:12.019984 | orchestrator | Monday 08 September 2025 00:53:43 +0000 (0:00:11.003) 0:00:47.529 ******
2025-09-08 00:56:12.019995 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:56:12.020006 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:56:12.020017 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:56:12.020028 | orchestrator |
2025-09-08 00:56:12.020038 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-09-08 00:56:12.020049 | orchestrator | Monday 08 September 2025 00:53:43 +0000 (0:00:00.691) 0:00:48.221 ******
2025-09-08 00:56:12.020060 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:56:12.020072 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:12.020092 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:12.020105 | orchestrator |
2025-09-08 00:56:12.020118 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-09-08 00:56:12.020131 | orchestrator | Monday 08 September 2025 00:53:44 +0000 (0:00:00.434) 0:00:48.655 ******
2025-09-08 00:56:12.020144 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:56:12.020157 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:12.020170 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:12.020183 | orchestrator |
2025-09-08 00:56:12.020196 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-09-08 00:56:12.020209 | orchestrator | Monday 08 September 2025 00:53:44 +0000 (0:00:00.436) 0:00:49.091 ******
2025-09-08 00:56:12.020222 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:56:12.020236 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:12.020249 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:12.020262 | orchestrator |
2025-09-08 00:56:12.020276 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-09-08 00:56:12.020294 | orchestrator | Monday 08 September 2025 00:53:45 +0000 (0:00:00.435) 0:00:49.527 ******
2025-09-08 00:56:12.020307 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:56:12.020320 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:56:12.020333 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:56:12.020347 | orchestrator |
2025-09-08 00:56:12.020360 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-09-08 00:56:12.020373 | orchestrator | Monday 08 September 2025 00:53:45 +0000 (0:00:00.869) 0:00:50.397 ******
2025-09-08 00:56:12.020385 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:56:12.020398 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:56:12.020411 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:56:12.020424 | orchestrator |
2025-09-08 00:56:12.020435 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-09-08 00:56:12.020478 | orchestrator | Monday 08 September 2025 00:53:46 +0000 (0:00:00.432) 0:00:50.830 ******
2025-09-08 00:56:12.020489 | orchestrator | 
skipping: [testbed-node-1] 2025-09-08 00:56:12.020500 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.020511 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-08 00:56:12.020522 | orchestrator | 2025-09-08 00:56:12.020532 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-08 00:56:12.020543 | orchestrator | Monday 08 September 2025 00:53:46 +0000 (0:00:00.383) 0:00:51.213 ****** 2025-09-08 00:56:12.020554 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.020565 | orchestrator | 2025-09-08 00:56:12.020576 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-08 00:56:12.020587 | orchestrator | Monday 08 September 2025 00:53:57 +0000 (0:00:10.815) 0:01:02.029 ****** 2025-09-08 00:56:12.020598 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:12.020609 | orchestrator | 2025-09-08 00:56:12.020619 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-08 00:56:12.020630 | orchestrator | Monday 08 September 2025 00:53:57 +0000 (0:00:00.118) 0:01:02.148 ****** 2025-09-08 00:56:12.020641 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:12.020652 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.020663 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.020674 | orchestrator | 2025-09-08 00:56:12.020684 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-08 00:56:12.020695 | orchestrator | Monday 08 September 2025 00:53:58 +0000 (0:00:00.990) 0:01:03.138 ****** 2025-09-08 00:56:12.020706 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.020717 | orchestrator | 2025-09-08 00:56:12.020728 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-08 00:56:12.020738 | orchestrator | 
Monday 08 September 2025 00:54:06 +0000 (0:00:07.841) 0:01:10.980 ****** 2025-09-08 00:56:12.020749 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:12.020760 | orchestrator | 2025-09-08 00:56:12.020778 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-08 00:56:12.020789 | orchestrator | Monday 08 September 2025 00:54:08 +0000 (0:00:01.753) 0:01:12.734 ****** 2025-09-08 00:56:12.020799 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:12.020810 | orchestrator | 2025-09-08 00:56:12.020821 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-08 00:56:12.020832 | orchestrator | Monday 08 September 2025 00:54:10 +0000 (0:00:02.538) 0:01:15.272 ****** 2025-09-08 00:56:12.020843 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.020853 | orchestrator | 2025-09-08 00:56:12.020864 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-08 00:56:12.020875 | orchestrator | Monday 08 September 2025 00:54:10 +0000 (0:00:00.120) 0:01:15.393 ****** 2025-09-08 00:56:12.020886 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:12.020897 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.020907 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.020918 | orchestrator | 2025-09-08 00:56:12.020929 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-08 00:56:12.020940 | orchestrator | Monday 08 September 2025 00:54:11 +0000 (0:00:00.536) 0:01:15.930 ****** 2025-09-08 00:56:12.020950 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:12.020961 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-08 00:56:12.020972 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:56:12.020983 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:56:12.020993 | 
orchestrator | 2025-09-08 00:56:12.021004 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-08 00:56:12.021015 | orchestrator | skipping: no hosts matched 2025-09-08 00:56:12.021026 | orchestrator | 2025-09-08 00:56:12.021041 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-08 00:56:12.021052 | orchestrator | 2025-09-08 00:56:12.021063 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-08 00:56:12.021074 | orchestrator | Monday 08 September 2025 00:54:11 +0000 (0:00:00.332) 0:01:16.262 ****** 2025-09-08 00:56:12.021085 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:56:12.021096 | orchestrator | 2025-09-08 00:56:12.021106 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-08 00:56:12.021117 | orchestrator | Monday 08 September 2025 00:54:35 +0000 (0:00:23.800) 0:01:40.063 ****** 2025-09-08 00:56:12.021128 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:56:12.021139 | orchestrator | 2025-09-08 00:56:12.021149 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-08 00:56:12.021160 | orchestrator | Monday 08 September 2025 00:54:52 +0000 (0:00:16.572) 0:01:56.635 ****** 2025-09-08 00:56:12.021171 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:56:12.021182 | orchestrator | 2025-09-08 00:56:12.021193 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-08 00:56:12.021203 | orchestrator | 2025-09-08 00:56:12.021214 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-08 00:56:12.021225 | orchestrator | Monday 08 September 2025 00:54:54 +0000 (0:00:02.556) 0:01:59.192 ****** 2025-09-08 00:56:12.021236 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:56:12.021247 | 
orchestrator | 2025-09-08 00:56:12.021257 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-08 00:56:12.021274 | orchestrator | Monday 08 September 2025 00:55:20 +0000 (0:00:25.468) 0:02:24.661 ****** 2025-09-08 00:56:12.021285 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:56:12.021296 | orchestrator | 2025-09-08 00:56:12.021307 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-08 00:56:12.021318 | orchestrator | Monday 08 September 2025 00:55:35 +0000 (0:00:15.616) 0:02:40.277 ****** 2025-09-08 00:56:12.021329 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:56:12.021340 | orchestrator | 2025-09-08 00:56:12.021350 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-08 00:56:12.021367 | orchestrator | 2025-09-08 00:56:12.021378 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-08 00:56:12.021389 | orchestrator | Monday 08 September 2025 00:55:38 +0000 (0:00:02.763) 0:02:43.041 ****** 2025-09-08 00:56:12.021400 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.021411 | orchestrator | 2025-09-08 00:56:12.021422 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-08 00:56:12.021433 | orchestrator | Monday 08 September 2025 00:55:50 +0000 (0:00:12.054) 0:02:55.095 ****** 2025-09-08 00:56:12.021461 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:12.021472 | orchestrator | 2025-09-08 00:56:12.021483 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-08 00:56:12.021494 | orchestrator | Monday 08 September 2025 00:55:55 +0000 (0:00:04.706) 0:02:59.802 ****** 2025-09-08 00:56:12.021504 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:12.021523 | orchestrator | 2025-09-08 00:56:12.021541 | orchestrator | PLAY [Apply 
mariadb post-configuration] **************************************** 2025-09-08 00:56:12.021568 | orchestrator | 2025-09-08 00:56:12.021589 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-08 00:56:12.021606 | orchestrator | Monday 08 September 2025 00:55:57 +0000 (0:00:02.436) 0:03:02.238 ****** 2025-09-08 00:56:12.021624 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:56:12.021641 | orchestrator | 2025-09-08 00:56:12.021659 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-08 00:56:12.021677 | orchestrator | Monday 08 September 2025 00:55:58 +0000 (0:00:00.552) 0:03:02.791 ****** 2025-09-08 00:56:12.021695 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.021714 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.021734 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.021751 | orchestrator | 2025-09-08 00:56:12.021769 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-08 00:56:12.021781 | orchestrator | Monday 08 September 2025 00:56:00 +0000 (0:00:02.445) 0:03:05.236 ****** 2025-09-08 00:56:12.021792 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.021803 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.021813 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.021824 | orchestrator | 2025-09-08 00:56:12.021835 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-08 00:56:12.021846 | orchestrator | Monday 08 September 2025 00:56:02 +0000 (0:00:02.116) 0:03:07.353 ****** 2025-09-08 00:56:12.021857 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.021868 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.021879 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.021890 | 
orchestrator | 2025-09-08 00:56:12.021900 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-08 00:56:12.021911 | orchestrator | Monday 08 September 2025 00:56:04 +0000 (0:00:02.097) 0:03:09.450 ****** 2025-09-08 00:56:12.021922 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.021933 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.021944 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:56:12.021955 | orchestrator | 2025-09-08 00:56:12.021966 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-08 00:56:12.021977 | orchestrator | Monday 08 September 2025 00:56:06 +0000 (0:00:02.033) 0:03:11.484 ****** 2025-09-08 00:56:12.021988 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:56:12.021999 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:56:12.022010 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:56:12.022056 | orchestrator | 2025-09-08 00:56:12.022068 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-08 00:56:12.022078 | orchestrator | Monday 08 September 2025 00:56:09 +0000 (0:00:03.004) 0:03:14.488 ****** 2025-09-08 00:56:12.022090 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:56:12.022101 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:56:12.022111 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:56:12.022132 | orchestrator | 2025-09-08 00:56:12.022143 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:56:12.022161 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-08 00:56:12.022172 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-08 00:56:12.022185 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  
rescued=0 ignored=1  2025-09-08 00:56:12.022196 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-08 00:56:12.022207 | orchestrator | 2025-09-08 00:56:12.022218 | orchestrator | 2025-09-08 00:56:12.022228 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:56:12.022239 | orchestrator | Monday 08 September 2025 00:56:10 +0000 (0:00:00.230) 0:03:14.719 ****** 2025-09-08 00:56:12.022250 | orchestrator | =============================================================================== 2025-09-08 00:56:12.022261 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 49.27s 2025-09-08 00:56:12.022272 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.19s 2025-09-08 00:56:12.022292 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.05s 2025-09-08 00:56:12.022303 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.00s 2025-09-08 00:56:12.022314 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.82s 2025-09-08 00:56:12.022325 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.84s 2025-09-08 00:56:12.022336 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.32s 2025-09-08 00:56:12.022346 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.71s 2025-09-08 00:56:12.022357 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.26s 2025-09-08 00:56:12.022368 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.03s 2025-09-08 00:56:12.022379 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.76s 2025-09-08 
00:56:12.022389 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.48s 2025-09-08 00:56:12.022400 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.33s 2025-09-08 00:56:12.022411 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.12s 2025-09-08 00:56:12.022422 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.05s 2025-09-08 00:56:12.022432 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.00s 2025-09-08 00:56:12.022465 | orchestrator | Check MariaDB service --------------------------------------------------- 2.88s 2025-09-08 00:56:12.022476 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.54s 2025-09-08 00:56:12.022487 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.45s 2025-09-08 00:56:12.022498 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.44s 2025-09-08 00:56:12.022509 | orchestrator | 2025-09-08 00:56:12 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:12.022520 | orchestrator | 2025-09-08 00:56:12 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:12.022531 | orchestrator | 2025-09-08 00:56:12 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:15.078238 | orchestrator | 2025-09-08 00:56:15 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:15.078957 | orchestrator | 2025-09-08 00:56:15 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:15.079847 | orchestrator | 2025-09-08 00:56:15 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:15.079867 | orchestrator | 2025-09-08 00:56:15 | INFO  | Wait 1 second(s) until 
the next check 2025-09-08 00:56:18.109834 | orchestrator | 2025-09-08 00:56:18 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:18.111206 | orchestrator | 2025-09-08 00:56:18 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:18.112426 | orchestrator | 2025-09-08 00:56:18 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:18.112688 | orchestrator | 2025-09-08 00:56:18 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:21.157942 | orchestrator | 2025-09-08 00:56:21 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:21.160046 | orchestrator | 2025-09-08 00:56:21 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:21.161912 | orchestrator | 2025-09-08 00:56:21 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:21.162117 | orchestrator | 2025-09-08 00:56:21 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:24.214238 | orchestrator | 2025-09-08 00:56:24 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:24.214909 | orchestrator | 2025-09-08 00:56:24 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:24.216123 | orchestrator | 2025-09-08 00:56:24 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:24.216145 | orchestrator | 2025-09-08 00:56:24 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:27.261179 | orchestrator | 2025-09-08 00:56:27 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:27.263010 | orchestrator | 2025-09-08 00:56:27 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:27.264875 | orchestrator | 2025-09-08 00:56:27 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 
00:56:27.265752 | orchestrator | 2025-09-08 00:56:27 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:30.298895 | orchestrator | 2025-09-08 00:56:30 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:30.299005 | orchestrator | 2025-09-08 00:56:30 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:30.300219 | orchestrator | 2025-09-08 00:56:30 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:30.300412 | orchestrator | 2025-09-08 00:56:30 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:33.337990 | orchestrator | 2025-09-08 00:56:33 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:33.338327 | orchestrator | 2025-09-08 00:56:33 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:33.339342 | orchestrator | 2025-09-08 00:56:33 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:33.339362 | orchestrator | 2025-09-08 00:56:33 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:36.376740 | orchestrator | 2025-09-08 00:56:36 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:36.377149 | orchestrator | 2025-09-08 00:56:36 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:36.378187 | orchestrator | 2025-09-08 00:56:36 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:36.378209 | orchestrator | 2025-09-08 00:56:36 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:39.417063 | orchestrator | 2025-09-08 00:56:39 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:39.417171 | orchestrator | 2025-09-08 00:56:39 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:39.420769 | orchestrator | 2025-09-08 00:56:39 | 
INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:39.420795 | orchestrator | 2025-09-08 00:56:39 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:42.468002 | orchestrator | 2025-09-08 00:56:42 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:42.473174 | orchestrator | 2025-09-08 00:56:42 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:42.475336 | orchestrator | 2025-09-08 00:56:42 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:42.475372 | orchestrator | 2025-09-08 00:56:42 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:45.514637 | orchestrator | 2025-09-08 00:56:45 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:45.516304 | orchestrator | 2025-09-08 00:56:45 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:45.517628 | orchestrator | 2025-09-08 00:56:45 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:45.518116 | orchestrator | 2025-09-08 00:56:45 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:48.561208 | orchestrator | 2025-09-08 00:56:48 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:48.563126 | orchestrator | 2025-09-08 00:56:48 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:48.565618 | orchestrator | 2025-09-08 00:56:48 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:48.565654 | orchestrator | 2025-09-08 00:56:48 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:51.611235 | orchestrator | 2025-09-08 00:56:51 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:51.612789 | orchestrator | 2025-09-08 00:56:51 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in 
state STARTED 2025-09-08 00:56:51.614133 | orchestrator | 2025-09-08 00:56:51 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:51.614162 | orchestrator | 2025-09-08 00:56:51 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:54.665390 | orchestrator | 2025-09-08 00:56:54 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:54.666376 | orchestrator | 2025-09-08 00:56:54 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:54.666758 | orchestrator | 2025-09-08 00:56:54 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:54.666781 | orchestrator | 2025-09-08 00:56:54 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:56:57.707909 | orchestrator | 2025-09-08 00:56:57 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:56:57.709383 | orchestrator | 2025-09-08 00:56:57 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:56:57.710754 | orchestrator | 2025-09-08 00:56:57 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:56:57.711021 | orchestrator | 2025-09-08 00:56:57 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:00.755187 | orchestrator | 2025-09-08 00:57:00 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:57:00.758931 | orchestrator | 2025-09-08 00:57:00 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:00.760613 | orchestrator | 2025-09-08 00:57:00 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:00.760641 | orchestrator | 2025-09-08 00:57:00 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:03.808075 | orchestrator | 2025-09-08 00:57:03 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:57:03.813241 | orchestrator 
| 2025-09-08 00:57:03 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:03.815418 | orchestrator | 2025-09-08 00:57:03 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:03.815494 | orchestrator | 2025-09-08 00:57:03 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:06.866006 | orchestrator | 2025-09-08 00:57:06 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:57:06.867492 | orchestrator | 2025-09-08 00:57:06 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:06.868930 | orchestrator | 2025-09-08 00:57:06 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:06.869491 | orchestrator | 2025-09-08 00:57:06 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:09.914313 | orchestrator | 2025-09-08 00:57:09 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:57:09.915042 | orchestrator | 2025-09-08 00:57:09 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:09.916855 | orchestrator | 2025-09-08 00:57:09 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:09.916894 | orchestrator | 2025-09-08 00:57:09 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:12.962009 | orchestrator | 2025-09-08 00:57:12 | INFO  | Task a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state STARTED 2025-09-08 00:57:12.963027 | orchestrator | 2025-09-08 00:57:12 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:12.964086 | orchestrator | 2025-09-08 00:57:12 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:12.964323 | orchestrator | 2025-09-08 00:57:12 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:16.016892 | orchestrator | 2025-09-08 00:57:16 | INFO  | Task 
a587bc57-a54b-42b2-be2c-01cccfe4dcca is in state SUCCESS 2025-09-08 00:57:16.018126 | orchestrator | 2025-09-08 00:57:16.018170 | orchestrator | 2025-09-08 00:57:16.018183 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-08 00:57:16.018196 | orchestrator | 2025-09-08 00:57:16.018207 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-08 00:57:16.018218 | orchestrator | Monday 08 September 2025 00:55:02 +0000 (0:00:00.600) 0:00:00.600 ****** 2025-09-08 00:57:16.018251 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 00:57:16.018264 | orchestrator | 2025-09-08 00:57:16.018275 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-08 00:57:16.018286 | orchestrator | Monday 08 September 2025 00:55:03 +0000 (0:00:00.639) 0:00:01.240 ****** 2025-09-08 00:57:16.018328 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:57:16.018341 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:57:16.018352 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:57:16.018363 | orchestrator | 2025-09-08 00:57:16.018374 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-08 00:57:16.018384 | orchestrator | Monday 08 September 2025 00:55:03 +0000 (0:00:00.640) 0:00:01.881 ****** 2025-09-08 00:57:16.018395 | orchestrator | ok: [testbed-node-3] 2025-09-08 00:57:16.018406 | orchestrator | ok: [testbed-node-4] 2025-09-08 00:57:16.018416 | orchestrator | ok: [testbed-node-5] 2025-09-08 00:57:16.018469 | orchestrator | 2025-09-08 00:57:16.018481 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-08 00:57:16.018491 | orchestrator | Monday 08 September 2025 00:55:03 +0000 (0:00:00.303) 0:00:02.184 ****** 2025-09-08 00:57:16.018502 | orchestrator | ok: 
[testbed-node-3]
2025-09-08 00:57:16.018513 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.018523 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.018534 | orchestrator |
2025-09-08 00:57:16.018545 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-08 00:57:16.018555 | orchestrator | Monday 08 September 2025 00:55:04 +0000 (0:00:00.834) 0:00:03.019 ******
2025-09-08 00:57:16.018566 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.018577 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.018587 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.018600 | orchestrator |
2025-09-08 00:57:16.018613 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-08 00:57:16.018626 | orchestrator | Monday 08 September 2025 00:55:05 +0000 (0:00:00.333) 0:00:03.352 ******
2025-09-08 00:57:16.018639 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.018652 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.018665 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.018678 | orchestrator |
2025-09-08 00:57:16.018690 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-08 00:57:16.018704 | orchestrator | Monday 08 September 2025 00:55:05 +0000 (0:00:00.325) 0:00:03.678 ******
2025-09-08 00:57:16.018717 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.018731 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.018743 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.018757 | orchestrator |
2025-09-08 00:57:16.018769 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-08 00:57:16.018783 | orchestrator | Monday 08 September 2025 00:55:05 +0000 (0:00:00.344) 0:00:04.023 ******
2025-09-08 00:57:16.018796 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.018810 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.018823 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.018837 | orchestrator |
2025-09-08 00:57:16.018850 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-08 00:57:16.018863 | orchestrator | Monday 08 September 2025 00:55:06 +0000 (0:00:00.538) 0:00:04.561 ******
2025-09-08 00:57:16.018875 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.018888 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.018900 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.018913 | orchestrator |
2025-09-08 00:57:16.018925 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-08 00:57:16.018940 | orchestrator | Monday 08 September 2025 00:55:06 +0000 (0:00:00.300) 0:00:04.861 ******
2025-09-08 00:57:16.018953 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:57:16.018964 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:57:16.018975 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:57:16.018986 | orchestrator |
2025-09-08 00:57:16.018996 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-08 00:57:16.019007 | orchestrator | Monday 08 September 2025 00:55:07 +0000 (0:00:00.642) 0:00:05.504 ******
2025-09-08 00:57:16.019026 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.019037 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.019048 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.019059 | orchestrator |
2025-09-08 00:57:16.019069 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-08 00:57:16.019080 | orchestrator | Monday 08 September 2025 00:55:07 +0000 (0:00:00.432) 0:00:05.937 ******
2025-09-08 00:57:16.019091 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:57:16.019102 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:57:16.019112 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:57:16.019123 | orchestrator |
2025-09-08 00:57:16.019134 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-08 00:57:16.019145 | orchestrator | Monday 08 September 2025 00:55:09 +0000 (0:00:02.208) 0:00:08.146 ******
2025-09-08 00:57:16.019156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:57:16.019167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:57:16.019178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:57:16.019188 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.019199 | orchestrator |
2025-09-08 00:57:16.019210 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-08 00:57:16.019234 | orchestrator | Monday 08 September 2025 00:55:10 +0000 (0:00:00.414) 0:00:08.560 ******
2025-09-08 00:57:16.019253 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.019268 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.019279 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.019290 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.019301 | orchestrator |
2025-09-08 00:57:16.019312 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-08 00:57:16.019322 | orchestrator | Monday 08 September 2025 00:55:11 +0000 (0:00:00.825) 0:00:09.385 ******
2025-09-08 00:57:16.019336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.019349 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.019360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.019379 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.019390 | orchestrator |
2025-09-08 00:57:16.019401 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-08 00:57:16.019411 | orchestrator | Monday 08 September 2025 00:55:11 +0000 (0:00:00.148) 0:00:09.534 ******
2025-09-08 00:57:16.019440 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fc9731a8c08f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-08 00:55:08.423873', 'end': '2025-09-08 00:55:08.467675', 'delta': '0:00:00.043802', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc9731a8c08f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.019456 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fcca6e515c1a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-08 00:55:09.200620', 'end': '2025-09-08 00:55:09.238576', 'delta': '0:00:00.037956', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fcca6e515c1a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.019481 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '90e1cbb8e0aa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-08 00:55:09.766580', 'end': '2025-09-08 00:55:09.810331', 'delta': '0:00:00.043751', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90e1cbb8e0aa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.019493 | orchestrator |
2025-09-08 00:57:16.019504 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-08 00:57:16.019515 | orchestrator | Monday 08 September 2025 00:55:11 +0000 (0:00:00.384) 0:00:09.918 ******
2025-09-08 00:57:16.019525 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.019536 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.019547 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.019558 | orchestrator |
2025-09-08 00:57:16.019568 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-08 00:57:16.019579 | orchestrator | Monday 08 September 2025 00:55:12 +0000 (0:00:00.449) 0:00:10.368 ******
2025-09-08 00:57:16.019590 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-09-08 00:57:16.019601 | orchestrator |
2025-09-08 00:57:16.019611 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-08 00:57:16.019622 | orchestrator | Monday 08 September 2025 00:55:14 +0000 (0:00:02.494) 0:00:12.862 ******
2025-09-08 00:57:16.019633 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.019643 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.019654 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.019665 | orchestrator |
2025-09-08 00:57:16.019675 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-08 00:57:16.019686 | orchestrator | Monday 08 September 2025 00:55:14 +0000 (0:00:00.304) 0:00:13.167 ******
2025-09-08 00:57:16.019704 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.019715 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.019726 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.019736 | orchestrator |
2025-09-08 00:57:16.019747 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-08 00:57:16.019758 | orchestrator | Monday 08 September 2025 00:55:15 +0000 (0:00:00.432) 0:00:13.600 ******
2025-09-08 00:57:16.019768 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.019779 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.019790 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.019800 | orchestrator |
2025-09-08 00:57:16.019811 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-08 00:57:16.019822 | orchestrator | Monday 08 September 2025 00:55:15 +0000 (0:00:00.517) 0:00:14.118 ******
2025-09-08 00:57:16.019832 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.019843 | orchestrator |
2025-09-08 00:57:16.019853 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-08 00:57:16.019864 | orchestrator | Monday 08 September 2025 00:55:16 +0000 (0:00:00.134) 0:00:14.254 ******
2025-09-08 00:57:16.019875 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.019886 | orchestrator |
2025-09-08 00:57:16.019896 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-08 00:57:16.019907 | orchestrator | Monday 08 September 2025 00:55:16 +0000 (0:00:00.248) 0:00:14.502 ******
2025-09-08 00:57:16.019917 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.019928 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.019939 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.019949 | orchestrator |
2025-09-08 00:57:16.019960 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-08 00:57:16.019971 | orchestrator | Monday 08 September 2025 00:55:16 +0000 (0:00:00.300) 0:00:14.803 ******
2025-09-08 00:57:16.019981 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.019992 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.020003 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.020013 | orchestrator |
2025-09-08 00:57:16.020024 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-08 00:57:16.020035 | orchestrator | Monday 08 September 2025 00:55:16 +0000 (0:00:00.370) 0:00:15.173 ******
2025-09-08 00:57:16.020045 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.020056 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.020067 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.020077 | orchestrator |
2025-09-08 00:57:16.020088 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-08 00:57:16.020099 | orchestrator | Monday 08 September 2025 00:55:17 +0000 (0:00:00.520) 0:00:15.694 ******
2025-09-08 00:57:16.020109 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.020120 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.020130 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.020141 | orchestrator |
2025-09-08 00:57:16.020151 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-08 00:57:16.020162 | orchestrator | Monday 08 September 2025 00:55:17 +0000 (0:00:00.346) 0:00:16.040 ******
2025-09-08 00:57:16.020173 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.020183 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.020194 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.020205 | orchestrator |
2025-09-08 00:57:16.020215 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-08 00:57:16.020226 | orchestrator | Monday 08 September 2025 00:55:18 +0000 (0:00:00.328) 0:00:16.368 ******
2025-09-08 00:57:16.020237 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.020247 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.020258 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.020268 | orchestrator |
2025-09-08 00:57:16.020286 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-08 00:57:16.020301 | orchestrator | Monday 08 September 2025 00:55:18 +0000 (0:00:00.318) 0:00:16.687 ******
2025-09-08 00:57:16.020313 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.020324 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.020334 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.020345 | orchestrator |
2025-09-08 00:57:16.020355 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-08 00:57:16.020371 | orchestrator | Monday 08 September 2025 00:55:19 +0000 (0:00:00.511) 0:00:17.199 ******
2025-09-08 00:57:16.020383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6245231a--5e27--588f--a545--a88193777b58-osd--block--6245231a--5e27--588f--a545--a88193777b58', 'dm-uuid-LVM-ybfRSmP8aGvHZUQPpShCMnW81sVOrSC9QwPPWmQXHuy8umSXHWxMosTwNB3imKdE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7231c7d5--5dfe--5215--9efd--b7a5c24f93db-osd--block--7231c7d5--5dfe--5215--9efd--b7a5c24f93db', 'dm-uuid-LVM-DCDH7v4K4rkh5TDCYsRcjSlEn4Mtwf95aX2nE1oqSx8ElBUBJTUYi4w7is09qig5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.020552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a-osd--block--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a', 'dm-uuid-LVM-mxP93V13tGOgpkMOcBTuQfkcNJX2UjsZS2aaa8YDcnJLK5Igyth1WabbrmtHcWMT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6245231a--5e27--588f--a545--a88193777b58-osd--block--6245231a--5e27--588f--a545--a88193777b58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zPbkUF-43iM-f14M-elPj-0f0f-rbpN-fue70D', 'scsi-0QEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb', 'scsi-SQEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.020596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7231c7d5--5dfe--5215--9efd--b7a5c24f93db-osd--block--7231c7d5--5dfe--5215--9efd--b7a5c24f93db'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3n0S09-IFC6-Nl3O-uLeF-6Jsb-WQZn-RBM2uq', 'scsi-0QEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5', 'scsi-SQEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.020608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e84ec590--0593--5433--8536--9c5125166743-osd--block--e84ec590--0593--5433--8536--9c5125166743', 'dm-uuid-LVM-WEqtChBOdKGBjIu5Y01mhGfsmTnLrlNqdBufqJ9YSIa2K3maj7hXtXDOt1KJOSWd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3', 'scsi-SQEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.020632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.020661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020734 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.020745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part1', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part14', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part15', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part16', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.020802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a-osd--block--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iXfWtL-RTU8-FkoO-Gbwb-oDS6-k7sB-9BfgEC', 'scsi-0QEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359', 'scsi-SQEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.020814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e84ec590--0593--5433--8536--9c5125166743-osd--block--e84ec590--0593--5433--8536--9c5125166743'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FKIkGg-R33Y-ICa0-ANyr-3sUG-8DEa-g2sTx2', 'scsi-0QEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98', 'scsi-SQEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.020826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2-osd--block--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2', 'dm-uuid-LVM-XTzZM3bLUDaXcirK3ZwflIcp3GvMOu5T6B1X5Wty47glqSemh8Y7qpfEJ745ZbCd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.020837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72', 'scsi-SQEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.020859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf-osd--block--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf', 'dm-uuid-LVM-1XvLKJmy5l0bje2V12wizBHeYh42P73FPgVNxtQcW1FD9Z3QWutNwTNHqe4kmMiZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-08 00:57:16.021348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-08 00:57:16.021379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:16.021391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:16.021403 | orchestrator | skipping: [testbed-node-4] 2025-09-08 00:57:16.021414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:16.021722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:16.021739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:16.021749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:16.021769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:16.021779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-08 00:57:16.021806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part1', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part14', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part15', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part16', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:16.021819 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2-osd--block--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-H51Ski-1r8N-dM8l-fA8Q-Fhgd-JN65-QyZofI', 'scsi-0QEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e', 'scsi-SQEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:16.021830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf-osd--block--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lRiLct-XwAy-PIGL-PiHo-1I52-cd9m-kP0Os0', 'scsi-0QEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d', 'scsi-SQEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:16.021847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962', 'scsi-SQEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:16.021863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-08 00:57:16.021873 | orchestrator | skipping: [testbed-node-5] 2025-09-08 00:57:16.021883 | orchestrator | 2025-09-08 00:57:16.021897 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-08 00:57:16.021907 | orchestrator | Monday 08 September 2025 00:55:19 +0000 (0:00:00.546) 0:00:17.745 ****** 2025-09-08 00:57:16.021918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6245231a--5e27--588f--a545--a88193777b58-osd--block--6245231a--5e27--588f--a545--a88193777b58', 'dm-uuid-LVM-ybfRSmP8aGvHZUQPpShCMnW81sVOrSC9QwPPWmQXHuy8umSXHWxMosTwNB3imKdE'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.021929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7231c7d5--5dfe--5215--9efd--b7a5c24f93db-osd--block--7231c7d5--5dfe--5215--9efd--b7a5c24f93db', 'dm-uuid-LVM-DCDH7v4K4rkh5TDCYsRcjSlEn4Mtwf95aX2nE1oqSx8ElBUBJTUYi4w7is09qig5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.021940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.021956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.021966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.021988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.021999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022090 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a-osd--block--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a', 'dm-uuid-LVM-mxP93V13tGOgpkMOcBTuQfkcNJX2UjsZS2aaa8YDcnJLK5Igyth1WabbrmtHcWMT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_da17a974-2052-4ef5-933e-f04448611c0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-08 00:57:16.022127 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e84ec590--0593--5433--8536--9c5125166743-osd--block--e84ec590--0593--5433--8536--9c5125166743', 'dm-uuid-LVM-WEqtChBOdKGBjIu5Y01mhGfsmTnLrlNqdBufqJ9YSIa2K3maj7hXtXDOt1KJOSWd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022143 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6245231a--5e27--588f--a545--a88193777b58-osd--block--6245231a--5e27--588f--a545--a88193777b58'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zPbkUF-43iM-f14M-elPj-0f0f-rbpN-fue70D', 'scsi-0QEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb', 'scsi-SQEMU_QEMU_HARDDISK_4631f46e-eb61-4253-8eaf-0e479598f4cb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022154 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022174 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7231c7d5--5dfe--5215--9efd--b7a5c24f93db-osd--block--7231c7d5--5dfe--5215--9efd--b7a5c24f93db'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3n0S09-IFC6-Nl3O-uLeF-6Jsb-WQZn-RBM2uq', 'scsi-0QEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5', 'scsi-SQEMU_QEMU_HARDDISK_71c81d38-851a-45a9-affe-242d84188eb5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022185 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3', 'scsi-SQEMU_QEMU_HARDDISK_93f20ee1-aa44-492e-8fd6-2ddde0eec0c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022226 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022242 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022253 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-08 00:57:16.022273 | orchestrator | skipping: [testbed-node-3] 2025-09-08 00:57:16.022283 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022300 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022357 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part1', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part14', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part15', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part16', 'scsi-SQEMU_QEMU_HARDDISK_5172d866-a36c-423d-97d0-17dd15bbbbb9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a-osd--block--39881e3d--2712--5fd1--9b8f--3e1ed3474a2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iXfWtL-RTU8-FkoO-Gbwb-oDS6-k7sB-9BfgEC', 'scsi-0QEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359', 'scsi-SQEMU_QEMU_HARDDISK_bdc2c250-49e1-41fe-b0ad-7dd2c4789359'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022390 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e84ec590--0593--5433--8536--9c5125166743-osd--block--e84ec590--0593--5433--8536--9c5125166743'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FKIkGg-R33Y-ICa0-ANyr-3sUG-8DEa-g2sTx2', 'scsi-0QEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98', 'scsi-SQEMU_QEMU_HARDDISK_d104b958-607f-4535-a6c3-7c5e10e43f98'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022402 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2-osd--block--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2', 'dm-uuid-LVM-XTzZM3bLUDaXcirK3ZwflIcp3GvMOu5T6B1X5Wty47glqSemh8Y7qpfEJ745ZbCd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022445 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72', 'scsi-SQEMU_QEMU_HARDDISK_0ed32d85-e4d7-46a8-b481-7cb7d466dd72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022459 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf-osd--block--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf', 'dm-uuid-LVM-1XvLKJmy5l0bje2V12wizBHeYh42P73FPgVNxtQcW1FD9Z3QWutNwTNHqe4kmMiZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022471 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022502 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.022514 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022526 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022548 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022573 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022590 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022627 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part1', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part14', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part15', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part16', 'scsi-SQEMU_QEMU_HARDDISK_5283f4eb-967a-45cb-9108-62eab8899a44-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
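Every block device on the storage nodes is iterated and skipped here because `osd_auto_discovery` is left at its default of false. As a minimal sketch only (a hypothetical helper, not ceph-ansible's actual code), the kind of filter auto-discovery applies when enabled can be expressed like this, using device facts condensed from the log:

```python
# Hypothetical sketch (NOT ceph-ansible source): the shape of the filter
# behind the 'osd_auto_discovery | default(False) | bool' condition seen in
# every "skipping" line above. With the flag false, nothing is auto-selected;
# with it true, only empty, non-removable disks with a real size qualify.
def osd_candidates(devices, osd_auto_discovery=False):
    if not osd_auto_discovery:
        return []  # the testbed leaves the flag at its default, so every item skips
    return [
        name for name, dev in devices.items()
        if not dev["partitions"]           # no existing partitions (excludes the root disk)
        and not dev["holders"]             # not already claimed (e.g. by ceph LVM volumes)
        and dev["removable"] == "0"        # excludes sr0, the QEMU DVD-ROM
        and dev["size"] != "0.00 Bytes"    # excludes unbacked loop devices
    ]

# Device facts condensed from the testbed-node-5 entries in the log:
devices = {
    "sda": {"partitions": {"sda1": {}}, "holders": [], "removable": "0", "size": "80.00 GB"},
    "sdb": {"partitions": {}, "holders": ["ceph-..."], "removable": "0", "size": "20.00 GB"},
    "sdd": {"partitions": {}, "holders": [], "removable": "0", "size": "20.00 GB"},
    "sr0": {"partitions": {}, "holders": [], "removable": "1", "size": "506.00 KB"},
    "loop0": {"partitions": {}, "holders": [], "removable": "0", "size": "0.00 Bytes"},
}
print(osd_candidates(devices))                            # []
print(osd_candidates(devices, osd_auto_discovery=True))   # ['sdd']
```

With the flag off, the result is empty, which is exactly why every loop item above reports `skip_reason: Conditional result was False`.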
2025-09-08 00:57:16.022641 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2-osd--block--8709f3ee--6295--5c1a--8e33--a410dc9aa8e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-H51Ski-1r8N-dM8l-fA8Q-Fhgd-JN65-QyZofI', 'scsi-0QEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e', 'scsi-SQEMU_QEMU_HARDDISK_b6d83665-6669-4f1a-a01e-1cb1a99e815e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022659 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf-osd--block--2f5f4832--0bc1--5ef5--ba0d--5b3759bf17bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lRiLct-XwAy-PIGL-PiHo-1I52-cd9m-kP0Os0', 'scsi-0QEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d', 'scsi-SQEMU_QEMU_HARDDISK_8ee7eb97-103b-48c1-b599-577d77aa5f2d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962', 'scsi-SQEMU_QEMU_HARDDISK_f2189477-3d04-4590-9bb4-080bdc335962'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-08-00-02-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-08 00:57:16.022705 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.022715 | orchestrator |
2025-09-08 00:57:16.022724 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-08 00:57:16.022734 | orchestrator | Monday 08 September 2025 00:55:20 +0000 (0:00:00.606) 0:00:18.352 ******
2025-09-08 00:57:16.022744 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.022754 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.022763 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.022773 | orchestrator |
2025-09-08 00:57:16.022782 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-08 00:57:16.022798 | orchestrator | Monday 08 September 2025 00:55:20 +0000 (0:00:00.751) 0:00:19.104 ******
2025-09-08 00:57:16.022807 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.022817 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.022826 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.022836 | orchestrator |
2025-09-08 00:57:16.022845 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-08 00:57:16.022855 | orchestrator | Monday 08 September 2025 00:55:21 +0000 (0:00:00.510) 0:00:19.614 ******
2025-09-08 00:57:16.022864 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.022874 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.022883 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.022893 | orchestrator |
2025-09-08 00:57:16.022902 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-08 00:57:16.022912 | orchestrator | Monday 08 September 2025 00:55:22 +0000 (0:00:00.674) 0:00:20.289 ******
2025-09-08 00:57:16.022922 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.022931 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.022941 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.022950 | orchestrator |
2025-09-08 00:57:16.022960 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-08 00:57:16.022969 | orchestrator | Monday 08 September 2025 00:55:22 +0000 (0:00:00.307) 0:00:20.596 ******
2025-09-08 00:57:16.022979 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.022988 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.022998 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.023007 | orchestrator |
2025-09-08 00:57:16.023017 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-08 00:57:16.023026 | orchestrator | Monday 08 September 2025 00:55:22 +0000 (0:00:00.417) 0:00:21.014 ******
2025-09-08 00:57:16.023036 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.023045 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.023055 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.023064 | orchestrator |
2025-09-08 00:57:16.023074 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-08 00:57:16.023083 | orchestrator | Monday 08 September 2025 00:55:23 +0000 (0:00:00.502) 0:00:21.517 ******
2025-09-08 00:57:16.023093 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:57:16.023103 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:57:16.023112 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:57:16.023122 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:57:16.023131 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:57:16.023140 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:57:16.023150 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:57:16.023159 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:57:16.023169 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:57:16.023178 | orchestrator |
2025-09-08 00:57:16.023188 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-08 00:57:16.023197 | orchestrator | Monday 08 September 2025 00:55:24 +0000 (0:00:00.875) 0:00:22.392 ******
2025-09-08 00:57:16.023207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-08 00:57:16.023217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-08 00:57:16.023226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-08 00:57:16.023236 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.023245 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-08 00:57:16.023255 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-08 00:57:16.023264 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-08 00:57:16.023274 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.023283 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-08 00:57:16.023299 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-08 00:57:16.023309 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-08 00:57:16.023318 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.023328 | orchestrator |
2025-09-08 00:57:16.023337 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-08 00:57:16.023347 | orchestrator | Monday 08 September 2025 00:55:24 +0000 (0:00:00.358) 0:00:22.751 ******
2025-09-08 00:57:16.023357 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 00:57:16.023366 | orchestrator |
2025-09-08 00:57:16.023376 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-08 00:57:16.023386 | orchestrator | Monday 08 September 2025 00:55:25 +0000 (0:00:00.720) 0:00:23.471 ******
2025-09-08 00:57:16.023396 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.023406 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.023415 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.023441 | orchestrator |
2025-09-08 00:57:16.023456 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-08 00:57:16.023466 | orchestrator | Monday 08 September 2025 00:55:25 +0000 (0:00:00.320) 0:00:23.791 ******
2025-09-08 00:57:16.023475 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.023485 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.023500 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.023510 | orchestrator |
2025-09-08 00:57:16.023519 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-08 00:57:16.023529 | orchestrator | Monday 08 September 2025 00:55:25 +0000 (0:00:00.310) 0:00:24.102 ******
2025-09-08 00:57:16.023538 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.023548 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.023557 | orchestrator | skipping: [testbed-node-5]
2025-09-08 00:57:16.023566 | orchestrator |
2025-09-08 00:57:16.023576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-08 00:57:16.023585 | orchestrator | Monday 08 September 2025 00:55:26 +0000 (0:00:00.326) 0:00:24.429 ******
2025-09-08 00:57:16.023595 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.023604 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.023614 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.023623 | orchestrator |
2025-09-08 00:57:16.023633 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-08 00:57:16.023642 | orchestrator | Monday 08 September 2025 00:55:26 +0000 (0:00:00.604) 0:00:25.034 ******
2025-09-08 00:57:16.023651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:16.023661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:57:16.023670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:57:16.023680 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.023689 | orchestrator |
2025-09-08 00:57:16.023699 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-08 00:57:16.023708 | orchestrator | Monday 08 September 2025 00:55:27 +0000 (0:00:00.376) 0:00:25.410 ******
2025-09-08 00:57:16.023717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:16.023727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:57:16.023736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:57:16.023746 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.023755 | orchestrator |
2025-09-08 00:57:16.023765 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-08 00:57:16.023774 | orchestrator | Monday 08 September 2025 00:55:27 +0000 (0:00:00.366) 0:00:25.776 ******
2025-09-08 00:57:16.023784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:16.023803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-08 00:57:16.023813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-08 00:57:16.023822 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.023831 | orchestrator |
2025-09-08 00:57:16.023841 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-08 00:57:16.023850 | orchestrator | Monday 08 September 2025 00:55:27 +0000 (0:00:00.397) 0:00:26.174 ******
2025-09-08 00:57:16.023860 | orchestrator | ok: [testbed-node-3]
2025-09-08 00:57:16.023870 | orchestrator | ok: [testbed-node-4]
2025-09-08 00:57:16.023879 | orchestrator | ok: [testbed-node-5]
2025-09-08 00:57:16.023888 | orchestrator |
2025-09-08 00:57:16.023898 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-08 00:57:16.023908 | orchestrator | Monday 08 September 2025 00:55:28 +0000 (0:00:00.373) 0:00:26.547 ******
2025-09-08 00:57:16.023917 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-08 00:57:16.023927 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-08 00:57:16.023936 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-08 00:57:16.023946 | orchestrator |
2025-09-08 00:57:16.023955 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-08 00:57:16.023965 | orchestrator | Monday 08 September 2025 00:55:28 +0000 (0:00:00.527) 0:00:27.074 ******
2025-09-08 00:57:16.023974 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:57:16.023984 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:57:16.023993 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:57:16.024003 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:16.024012 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-08 00:57:16.024022 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-08 00:57:16.024031 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-08 00:57:16.024041 | orchestrator |
2025-09-08 00:57:16.024050 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-08 00:57:16.024059 | orchestrator | Monday 08 September 2025 00:55:29 +0000 (0:00:01.009) 0:00:28.084 ******
2025-09-08 00:57:16.024069 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-08 00:57:16.024078 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-08 00:57:16.024088 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-08 00:57:16.024097 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-08 00:57:16.024107 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-08 00:57:16.024116 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-08 00:57:16.024126 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-08 00:57:16.024135 | orchestrator |
2025-09-08 00:57:16.024149 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-08 00:57:16.024159 | orchestrator | Monday 08 September 2025 00:55:31 +0000 (0:00:02.030) 0:00:30.115 ******
2025-09-08 00:57:16.024168 | orchestrator | skipping: [testbed-node-3]
2025-09-08 00:57:16.024178 | orchestrator | skipping: [testbed-node-4]
2025-09-08 00:57:16.024192 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-08 00:57:16.024202 | orchestrator |
2025-09-08 00:57:16.024211 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-08 00:57:16.024221 | orchestrator | Monday 08 September 2025 00:55:32 +0000 (0:00:00.379) 0:00:30.494 ******
2025-09-08 00:57:16.024231 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-08 00:57:16.024247 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-08 00:57:16.024257 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-08 00:57:16.024267 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-08 00:57:16.024277 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-08 00:57:16.024287 | orchestrator |
2025-09-08 00:57:16.024296 | orchestrator | TASK [generate keys] ***********************************************************
2025-09-08 00:57:16.024306 | orchestrator | Monday 08 September 2025 00:56:18 +0000 (0:00:46.076) 0:01:16.570 ******
2025-09-08 00:57:16.024315 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024325 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024334 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024344 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024353 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024363 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024372 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-09-08 00:57:16.024382 | orchestrator |
2025-09-08 00:57:16.024391 | orchestrator | TASK [get keys from monitors] **************************************************
2025-09-08 00:57:16.024401 | orchestrator | Monday 08 September 2025 00:56:42 +0000 (0:00:24.311) 0:01:40.881 ******
2025-09-08 00:57:16.024410 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024420 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024481 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024491 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024501 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024511 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024522 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-08 00:57:16.024532 | orchestrator |
2025-09-08 00:57:16.024542 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-09-08 00:57:16.024552 | orchestrator | Monday 08 September 2025 00:56:54 +0000 (0:00:12.313) 0:01:53.195 ******
2025-09-08 00:57:16.024562 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024573 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-08 00:57:16.024583 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-08 00:57:16.024599 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024609 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-08 00:57:16.024620 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-08 00:57:16.024637 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024647 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-08 00:57:16.024658 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-08 00:57:16.024694 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024704 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-08 00:57:16.024714 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-08 00:57:16.024725 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-08 00:57:16.024735 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-08 00:57:16.024745 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:57:16.024756 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-08 00:57:16.024766 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-08 00:57:16.024776 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-08 00:57:16.024787 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-08 00:57:16.024797 | orchestrator | 2025-09-08 00:57:16.024808 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:57:16.024818 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-08 00:57:16.024831 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-08 00:57:16.024842 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-08 00:57:16.024853 | orchestrator | 2025-09-08 00:57:16.024863 | orchestrator | 2025-09-08 00:57:16.024873 | orchestrator | 2025-09-08 00:57:16.024884 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:57:16.024894 | orchestrator | Monday 08 September 2025 00:57:13 +0000 (0:00:18.120) 0:02:11.316 ****** 2025-09-08 00:57:16.024904 | orchestrator | =============================================================================== 2025-09-08 00:57:16.024915 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.08s 2025-09-08 00:57:16.024925 | orchestrator | generate keys ---------------------------------------------------------- 24.31s 2025-09-08 00:57:16.024936 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.12s 
2025-09-08 00:57:16.024946 | orchestrator | get keys from monitors ------------------------------------------------- 12.31s 2025-09-08 00:57:16.024956 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.49s 2025-09-08 00:57:16.024966 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.21s 2025-09-08 00:57:16.024977 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.03s 2025-09-08 00:57:16.024987 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.01s 2025-09-08 00:57:16.024996 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.88s 2025-09-08 00:57:16.025004 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s 2025-09-08 00:57:16.025013 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.83s 2025-09-08 00:57:16.025027 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.75s 2025-09-08 00:57:16.025035 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s 2025-09-08 00:57:16.025044 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2025-09-08 00:57:16.025052 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2025-09-08 00:57:16.025061 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s 2025-09-08 00:57:16.025069 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s 2025-09-08 00:57:16.025078 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.61s 2025-09-08 00:57:16.025086 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.60s 2025-09-08 
00:57:16.025094 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.55s 2025-09-08 00:57:16.025103 | orchestrator | 2025-09-08 00:57:16 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:16.025112 | orchestrator | 2025-09-08 00:57:16 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:16.025120 | orchestrator | 2025-09-08 00:57:16 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:16.025129 | orchestrator | 2025-09-08 00:57:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:19.073416 | orchestrator | 2025-09-08 00:57:19 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:19.074645 | orchestrator | 2025-09-08 00:57:19 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:19.076203 | orchestrator | 2025-09-08 00:57:19 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:19.076228 | orchestrator | 2025-09-08 00:57:19 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:22.126346 | orchestrator | 2025-09-08 00:57:22 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:22.128395 | orchestrator | 2025-09-08 00:57:22 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:22.129642 | orchestrator | 2025-09-08 00:57:22 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:22.129662 | orchestrator | 2025-09-08 00:57:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:25.175290 | orchestrator | 2025-09-08 00:57:25 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:25.177163 | orchestrator | 2025-09-08 00:57:25 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:25.181486 | orchestrator | 2025-09-08 
00:57:25 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:25.181867 | orchestrator | 2025-09-08 00:57:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:28.235888 | orchestrator | 2025-09-08 00:57:28 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:28.238773 | orchestrator | 2025-09-08 00:57:28 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:28.240750 | orchestrator | 2025-09-08 00:57:28 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:28.240786 | orchestrator | 2025-09-08 00:57:28 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:31.291867 | orchestrator | 2025-09-08 00:57:31 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:31.292546 | orchestrator | 2025-09-08 00:57:31 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:31.297078 | orchestrator | 2025-09-08 00:57:31 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:31.297109 | orchestrator | 2025-09-08 00:57:31 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:34.341069 | orchestrator | 2025-09-08 00:57:34 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:34.342667 | orchestrator | 2025-09-08 00:57:34 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:34.344178 | orchestrator | 2025-09-08 00:57:34 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:34.344214 | orchestrator | 2025-09-08 00:57:34 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:37.389400 | orchestrator | 2025-09-08 00:57:37 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:37.391261 | orchestrator | 2025-09-08 00:57:37 | INFO  | Task 
03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:37.393228 | orchestrator | 2025-09-08 00:57:37 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:37.393253 | orchestrator | 2025-09-08 00:57:37 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:40.443957 | orchestrator | 2025-09-08 00:57:40 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:40.445678 | orchestrator | 2025-09-08 00:57:40 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:40.448381 | orchestrator | 2025-09-08 00:57:40 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:40.448484 | orchestrator | 2025-09-08 00:57:40 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:43.502687 | orchestrator | 2025-09-08 00:57:43 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:43.505238 | orchestrator | 2025-09-08 00:57:43 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state STARTED 2025-09-08 00:57:43.506777 | orchestrator | 2025-09-08 00:57:43 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:43.506809 | orchestrator | 2025-09-08 00:57:43 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:46.553079 | orchestrator | 2025-09-08 00:57:46 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state STARTED 2025-09-08 00:57:46.554980 | orchestrator | 2025-09-08 00:57:46 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:46.556849 | orchestrator | 2025-09-08 00:57:46 | INFO  | Task 03a35dcd-24f9-4d08-afb8-cbaed08ef53a is in state SUCCESS 2025-09-08 00:57:46.559306 | orchestrator | 2025-09-08 00:57:46 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:46.559619 | orchestrator | 2025-09-08 00:57:46 | INFO  | Wait 1 second(s) until the next 
check 2025-09-08 00:57:49.603502 | orchestrator | 2025-09-08 00:57:49 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state STARTED 2025-09-08 00:57:49.605854 | orchestrator | 2025-09-08 00:57:49 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:49.607483 | orchestrator | 2025-09-08 00:57:49 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:49.607518 | orchestrator | 2025-09-08 00:57:49 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:52.646595 | orchestrator | 2025-09-08 00:57:52 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state STARTED 2025-09-08 00:57:52.649973 | orchestrator | 2025-09-08 00:57:52 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:52.652047 | orchestrator | 2025-09-08 00:57:52 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:52.652077 | orchestrator | 2025-09-08 00:57:52 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:55.695873 | orchestrator | 2025-09-08 00:57:55 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state STARTED 2025-09-08 00:57:55.697973 | orchestrator | 2025-09-08 00:57:55 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:55.699378 | orchestrator | 2025-09-08 00:57:55 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:57:55.699405 | orchestrator | 2025-09-08 00:57:55 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:57:58.740373 | orchestrator | 2025-09-08 00:57:58 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state STARTED 2025-09-08 00:57:58.741823 | orchestrator | 2025-09-08 00:57:58 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:57:58.743640 | orchestrator | 2025-09-08 00:57:58 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 
00:57:58.744037 | orchestrator | 2025-09-08 00:57:58 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:58:01.780345 | orchestrator | 2025-09-08 00:58:01 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state STARTED 2025-09-08 00:58:01.782587 | orchestrator | 2025-09-08 00:58:01 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:58:01.783369 | orchestrator | 2025-09-08 00:58:01 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:58:01.783388 | orchestrator | 2025-09-08 00:58:01 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:58:04.837982 | orchestrator | 2025-09-08 00:58:04 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state STARTED 2025-09-08 00:58:04.841777 | orchestrator | 2025-09-08 00:58:04 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state STARTED 2025-09-08 00:58:04.844735 | orchestrator | 2025-09-08 00:58:04 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED 2025-09-08 00:58:04.844862 | orchestrator | 2025-09-08 00:58:04 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:58:07.883238 | orchestrator | 2025-09-08 00:58:07 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state STARTED 2025-09-08 00:58:07.886283 | orchestrator | 2025-09-08 00:58:07 | INFO  | Task 4ccdcce7-f4ac-4292-8c7c-9f08141d4e96 is in state SUCCESS 2025-09-08 00:58:07.888332 | orchestrator | 2025-09-08 00:58:07.888370 | orchestrator | 2025-09-08 00:58:07.888382 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-08 00:58:07.888395 | orchestrator | 2025-09-08 00:58:07.888406 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-08 00:58:07.888448 | orchestrator | Monday 08 September 2025 00:57:17 +0000 (0:00:00.162) 0:00:00.162 ****** 2025-09-08 00:58:07.888460 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-08 00:58:07.888473 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:07.888485 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:07.888496 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-08 00:58:07.888508 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:07.888548 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-08 00:58:07.888560 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-08 00:58:07.888590 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-08 00:58:07.888602 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-08 00:58:07.888613 | orchestrator | 2025-09-08 00:58:07.888624 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-08 00:58:07.888635 | orchestrator | Monday 08 September 2025 00:57:21 +0000 (0:00:04.244) 0:00:04.407 ****** 2025-09-08 00:58:07.888647 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-08 00:58:07.888659 | orchestrator | 2025-09-08 00:58:07.888671 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-08 00:58:07.888682 | orchestrator | Monday 08 September 2025 00:57:22 +0000 (0:00:00.982) 0:00:05.389 ****** 2025-09-08 00:58:07.888693 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-08 00:58:07.888704 | orchestrator | changed: 
[testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:07.888715 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:07.888727 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-08 00:58:07.888738 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:07.888749 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-08 00:58:07.888760 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-08 00:58:07.888772 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-08 00:58:07.888783 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-08 00:58:07.888794 | orchestrator | 2025-09-08 00:58:07.888805 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-08 00:58:07.888816 | orchestrator | Monday 08 September 2025 00:57:36 +0000 (0:00:13.540) 0:00:18.929 ****** 2025-09-08 00:58:07.888828 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-08 00:58:07.888839 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:07.888850 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:07.888861 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-08 00:58:07.888872 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-08 00:58:07.888884 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-08 00:58:07.888897 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-08 00:58:07.888909 | orchestrator 
| changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-08 00:58:07.888923 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-08 00:58:07.888936 | orchestrator | 2025-09-08 00:58:07.888950 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:58:07.888964 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:58:07.888979 | orchestrator | 2025-09-08 00:58:07.888992 | orchestrator | 2025-09-08 00:58:07.889006 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:58:07.889018 | orchestrator | Monday 08 September 2025 00:57:43 +0000 (0:00:06.779) 0:00:25.709 ****** 2025-09-08 00:58:07.889031 | orchestrator | =============================================================================== 2025-09-08 00:58:07.889044 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.54s 2025-09-08 00:58:07.889066 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.78s 2025-09-08 00:58:07.889080 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.24s 2025-09-08 00:58:07.889093 | orchestrator | Create share directory -------------------------------------------------- 0.98s 2025-09-08 00:58:07.889106 | orchestrator | 2025-09-08 00:58:07.889120 | orchestrator | 2025-09-08 00:58:07.889134 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:58:07.889147 | orchestrator | 2025-09-08 00:58:07.889171 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:58:07.889185 | orchestrator | Monday 08 September 2025 00:56:14 +0000 (0:00:00.264) 0:00:00.264 ****** 2025-09-08 00:58:07.889198 | orchestrator | ok: [testbed-node-0] 
2025-09-08 00:58:07.889212 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.889225 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.889238 | orchestrator | 2025-09-08 00:58:07.889250 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:58:07.889261 | orchestrator | Monday 08 September 2025 00:56:14 +0000 (0:00:00.327) 0:00:00.591 ****** 2025-09-08 00:58:07.889272 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-08 00:58:07.889284 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-08 00:58:07.889295 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-08 00:58:07.889305 | orchestrator | 2025-09-08 00:58:07.889316 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-08 00:58:07.889327 | orchestrator | 2025-09-08 00:58:07.889339 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-08 00:58:07.889350 | orchestrator | Monday 08 September 2025 00:56:15 +0000 (0:00:00.463) 0:00:01.055 ****** 2025-09-08 00:58:07.889361 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:58:07.889371 | orchestrator | 2025-09-08 00:58:07.889388 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-08 00:58:07.889399 | orchestrator | Monday 08 September 2025 00:56:15 +0000 (0:00:00.450) 0:00:01.506 ****** 2025-09-08 00:58:07.889445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:07.889490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-09-08 00:58:07.889505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:07.889525 | orchestrator | 2025-09-08 00:58:07.889536 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-08 00:58:07.889548 | orchestrator | Monday 08 September 2025 00:56:16 +0000 (0:00:01.043) 0:00:02.549 ****** 2025-09-08 00:58:07.889559 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.889570 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.889581 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.889592 | orchestrator | 2025-09-08 00:58:07.889603 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-08 00:58:07.889614 | orchestrator | Monday 08 September 2025 00:56:17 +0000 (0:00:00.351) 0:00:02.900 ****** 2025-09-08 00:58:07.889625 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-08 00:58:07.889642 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-08 00:58:07.889653 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-08 00:58:07.889664 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-08 00:58:07.889675 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-08 00:58:07.889686 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-08 00:58:07.889697 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-08 00:58:07.889708 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-08 
00:58:07.889719 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-08 00:58:07.889730 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-08 00:58:07.889741 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-08 00:58:07.889752 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-08 00:58:07.889763 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-08 00:58:07.889779 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-08 00:58:07.889790 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-08 00:58:07.889801 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-08 00:58:07.889812 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-08 00:58:07.889823 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-08 00:58:07.889834 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-08 00:58:07.889845 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-08 00:58:07.889856 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-08 00:58:07.889868 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-08 00:58:07.889879 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-08 00:58:07.889889 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-08 00:58:07.889908 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml 
for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-08 00:58:07.889922 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-08 00:58:07.889933 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-08 00:58:07.889945 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-08 00:58:07.889956 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-08 00:58:07.889967 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-08 00:58:07.889978 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-08 00:58:07.889989 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-08 00:58:07.890000 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-08 00:58:07.890011 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-08 00:58:07.890082 | orchestrator | 2025-09-08 00:58:07.890094 | orchestrator | TASK [horizon : Update policy file 
name] *************************************** 2025-09-08 00:58:07.890105 | orchestrator | Monday 08 September 2025 00:56:17 +0000 (0:00:00.710) 0:00:03.611 ****** 2025-09-08 00:58:07.890116 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.890127 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.890137 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.890148 | orchestrator | 2025-09-08 00:58:07.890159 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.890170 | orchestrator | Monday 08 September 2025 00:56:18 +0000 (0:00:00.287) 0:00:03.898 ****** 2025-09-08 00:58:07.890181 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890192 | orchestrator | 2025-09-08 00:58:07.890210 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.890221 | orchestrator | Monday 08 September 2025 00:56:18 +0000 (0:00:00.130) 0:00:04.028 ****** 2025-09-08 00:58:07.890232 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890243 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.890254 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.890265 | orchestrator | 2025-09-08 00:58:07.890276 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:07.890287 | orchestrator | Monday 08 September 2025 00:56:18 +0000 (0:00:00.389) 0:00:04.417 ****** 2025-09-08 00:58:07.890298 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.890309 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.890320 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.890330 | orchestrator | 2025-09-08 00:58:07.890341 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.890352 | orchestrator | Monday 08 September 2025 00:56:19 +0000 (0:00:00.340) 0:00:04.758 ****** 2025-09-08 
00:58:07.890363 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890374 | orchestrator | 2025-09-08 00:58:07.890385 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.890403 | orchestrator | Monday 08 September 2025 00:56:19 +0000 (0:00:00.144) 0:00:04.902 ****** 2025-09-08 00:58:07.890431 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890443 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.890454 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.890464 | orchestrator | 2025-09-08 00:58:07.890481 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:07.890492 | orchestrator | Monday 08 September 2025 00:56:19 +0000 (0:00:00.296) 0:00:05.199 ****** 2025-09-08 00:58:07.890503 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.890514 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.890524 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.890535 | orchestrator | 2025-09-08 00:58:07.890546 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.890557 | orchestrator | Monday 08 September 2025 00:56:19 +0000 (0:00:00.299) 0:00:05.498 ****** 2025-09-08 00:58:07.890567 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890578 | orchestrator | 2025-09-08 00:58:07.890589 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.890600 | orchestrator | Monday 08 September 2025 00:56:20 +0000 (0:00:00.355) 0:00:05.854 ****** 2025-09-08 00:58:07.890610 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890621 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.890632 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.890642 | orchestrator | 2025-09-08 00:58:07.890653 | orchestrator | TASK [horizon : 
Update policy file name] *************************************** 2025-09-08 00:58:07.890663 | orchestrator | Monday 08 September 2025 00:56:20 +0000 (0:00:00.348) 0:00:06.203 ****** 2025-09-08 00:58:07.890674 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.890685 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.890696 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.890706 | orchestrator | 2025-09-08 00:58:07.890717 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.890728 | orchestrator | Monday 08 September 2025 00:56:20 +0000 (0:00:00.315) 0:00:06.518 ****** 2025-09-08 00:58:07.890739 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890749 | orchestrator | 2025-09-08 00:58:07.890760 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.890771 | orchestrator | Monday 08 September 2025 00:56:20 +0000 (0:00:00.135) 0:00:06.654 ****** 2025-09-08 00:58:07.890781 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890792 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.890803 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.890814 | orchestrator | 2025-09-08 00:58:07.890824 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:07.890835 | orchestrator | Monday 08 September 2025 00:56:21 +0000 (0:00:00.301) 0:00:06.955 ****** 2025-09-08 00:58:07.890846 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.890857 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.890867 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.890878 | orchestrator | 2025-09-08 00:58:07.890889 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.890899 | orchestrator | Monday 08 September 2025 00:56:21 +0000 (0:00:00.502) 0:00:07.458 ****** 
2025-09-08 00:58:07.890910 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890921 | orchestrator | 2025-09-08 00:58:07.890931 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.890942 | orchestrator | Monday 08 September 2025 00:56:21 +0000 (0:00:00.152) 0:00:07.610 ****** 2025-09-08 00:58:07.890953 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.890964 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.890974 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.890985 | orchestrator | 2025-09-08 00:58:07.890995 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:07.891006 | orchestrator | Monday 08 September 2025 00:56:22 +0000 (0:00:00.313) 0:00:07.923 ****** 2025-09-08 00:58:07.891024 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.891035 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.891046 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.891056 | orchestrator | 2025-09-08 00:58:07.891067 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.891078 | orchestrator | Monday 08 September 2025 00:56:22 +0000 (0:00:00.327) 0:00:08.251 ****** 2025-09-08 00:58:07.891089 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891099 | orchestrator | 2025-09-08 00:58:07.891110 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.891121 | orchestrator | Monday 08 September 2025 00:56:22 +0000 (0:00:00.144) 0:00:08.396 ****** 2025-09-08 00:58:07.891132 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891142 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.891153 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.891164 | orchestrator | 2025-09-08 00:58:07.891174 | orchestrator | TASK 
[horizon : Update policy file name] *************************************** 2025-09-08 00:58:07.891185 | orchestrator | Monday 08 September 2025 00:56:23 +0000 (0:00:00.481) 0:00:08.877 ****** 2025-09-08 00:58:07.891196 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.891212 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.891224 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.891235 | orchestrator | 2025-09-08 00:58:07.891246 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.891256 | orchestrator | Monday 08 September 2025 00:56:23 +0000 (0:00:00.321) 0:00:09.198 ****** 2025-09-08 00:58:07.891267 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891278 | orchestrator | 2025-09-08 00:58:07.891288 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.891299 | orchestrator | Monday 08 September 2025 00:56:23 +0000 (0:00:00.148) 0:00:09.347 ****** 2025-09-08 00:58:07.891310 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891321 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.891331 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.891342 | orchestrator | 2025-09-08 00:58:07.891353 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:07.891364 | orchestrator | Monday 08 September 2025 00:56:23 +0000 (0:00:00.299) 0:00:09.647 ****** 2025-09-08 00:58:07.891374 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.891385 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.891396 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.891406 | orchestrator | 2025-09-08 00:58:07.891469 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.891481 | orchestrator | Monday 08 September 2025 00:56:24 +0000 (0:00:00.373) 
0:00:10.020 ****** 2025-09-08 00:58:07.891492 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891502 | orchestrator | 2025-09-08 00:58:07.891518 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.891529 | orchestrator | Monday 08 September 2025 00:56:24 +0000 (0:00:00.120) 0:00:10.141 ****** 2025-09-08 00:58:07.891540 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891551 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.891562 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.891572 | orchestrator | 2025-09-08 00:58:07.891583 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:07.891592 | orchestrator | Monday 08 September 2025 00:56:24 +0000 (0:00:00.461) 0:00:10.602 ****** 2025-09-08 00:58:07.891602 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.891611 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.891621 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.891630 | orchestrator | 2025-09-08 00:58:07.891640 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.891649 | orchestrator | Monday 08 September 2025 00:56:25 +0000 (0:00:00.300) 0:00:10.902 ****** 2025-09-08 00:58:07.891666 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891676 | orchestrator | 2025-09-08 00:58:07.891685 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.891695 | orchestrator | Monday 08 September 2025 00:56:25 +0000 (0:00:00.150) 0:00:11.052 ****** 2025-09-08 00:58:07.891704 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891714 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.891723 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.891733 | orchestrator | 2025-09-08 00:58:07.891743 | 
orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-08 00:58:07.891752 | orchestrator | Monday 08 September 2025 00:56:25 +0000 (0:00:00.304) 0:00:11.356 ****** 2025-09-08 00:58:07.891762 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:58:07.891771 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:58:07.891781 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:58:07.891790 | orchestrator | 2025-09-08 00:58:07.891800 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-08 00:58:07.891810 | orchestrator | Monday 08 September 2025 00:56:26 +0000 (0:00:00.508) 0:00:11.865 ****** 2025-09-08 00:58:07.891819 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891829 | orchestrator | 2025-09-08 00:58:07.891838 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-08 00:58:07.891848 | orchestrator | Monday 08 September 2025 00:56:26 +0000 (0:00:00.136) 0:00:12.002 ****** 2025-09-08 00:58:07.891858 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.891867 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.891876 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.891886 | orchestrator | 2025-09-08 00:58:07.891895 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-08 00:58:07.891905 | orchestrator | Monday 08 September 2025 00:56:26 +0000 (0:00:00.306) 0:00:12.309 ****** 2025-09-08 00:58:07.891915 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:58:07.891924 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:58:07.891933 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:58:07.891943 | orchestrator | 2025-09-08 00:58:07.891953 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-08 00:58:07.891962 | orchestrator | Monday 08 September 2025 
00:56:28 +0000 (0:00:01.642) 0:00:13.952 ****** 2025-09-08 00:58:07.891972 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-08 00:58:07.891981 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-08 00:58:07.891991 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-08 00:58:07.892000 | orchestrator | 2025-09-08 00:58:07.892010 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-08 00:58:07.892019 | orchestrator | Monday 08 September 2025 00:56:30 +0000 (0:00:02.043) 0:00:15.995 ****** 2025-09-08 00:58:07.892029 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-08 00:58:07.892039 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-08 00:58:07.892048 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-08 00:58:07.892058 | orchestrator | 2025-09-08 00:58:07.892068 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-08 00:58:07.892082 | orchestrator | Monday 08 September 2025 00:56:32 +0000 (0:00:02.470) 0:00:18.466 ****** 2025-09-08 00:58:07.892092 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-08 00:58:07.892102 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-08 00:58:07.892111 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-08 00:58:07.892133 | orchestrator | 2025-09-08 00:58:07.892143 | orchestrator | TASK [horizon : Copying over existing policy file] 
***************************** 2025-09-08 00:58:07.892152 | orchestrator | Monday 08 September 2025 00:56:34 +0000 (0:00:01.655) 0:00:20.121 ****** 2025-09-08 00:58:07.892162 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.892171 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.892181 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.892190 | orchestrator | 2025-09-08 00:58:07.892200 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-08 00:58:07.892209 | orchestrator | Monday 08 September 2025 00:56:34 +0000 (0:00:00.315) 0:00:20.437 ****** 2025-09-08 00:58:07.892219 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:58:07.892228 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:58:07.892238 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:58:07.892248 | orchestrator | 2025-09-08 00:58:07.892257 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-08 00:58:07.892272 | orchestrator | Monday 08 September 2025 00:56:35 +0000 (0:00:00.299) 0:00:20.737 ****** 2025-09-08 00:58:07.892282 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:58:07.892292 | orchestrator | 2025-09-08 00:58:07.892301 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-08 00:58:07.892311 | orchestrator | Monday 08 September 2025 00:56:35 +0000 (0:00:00.810) 0:00:21.548 ****** 2025-09-08 00:58:07.892322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:07.892348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-09-08 00:58:07.892366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-08 00:58:07.892383 | orchestrator | 2025-09-08 00:58:07.892392 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-08 00:58:07.892402 | orchestrator | Monday 08 September 2025 00:56:37 +0000 (0:00:01.533) 0:00:23.081 ****** 2025-09-08 00:58:07.892469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-08 00:58:07.892483 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:58:07.892521 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:58:07.892548 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:58:07.892567 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-09-08 00:58:07.892577 | orchestrator | Monday 08 September 2025 00:56:38 +0000 (0:00:00.658) 0:00:23.740 ******
2025-09-08 00:58:07.892612 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:58:07.892638 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:58:07.892673 | orchestrator | skipping: [testbed-node-2]
2025-09-08 00:58:07.892698 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2025-09-08 00:58:07.892707 | orchestrator | Monday 08 September 2025 00:56:39 +0000 (0:00:01.255) 0:00:24.995 ******
2025-09-08 00:58:07.892718 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:58:07.892747 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:58:07.892758 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:58:07.892781 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-08 00:58:07.892788 | orchestrator | Monday 08 September 2025 00:56:40 +0000 (0:00:01.260) 0:00:26.256 ******
2025-09-08 00:58:07.892796 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:58:07.892804 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:58:07.892812 | orchestrator |
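The `haproxy` entries repeated in the items above all carry the same `frontend_http_extra` rule, which diverts ACME HTTP-01 challenge requests away from Horizon so the certificate client can answer them. A minimal sketch of the HAProxy configuration such a rule renders to (frontend/backend names other than `acme_client_back`, and the server address, are illustrative assumptions, not taken from this deployment):

```haproxy
frontend horizon_front
    bind *:80
    # Divert Let's Encrypt HTTP-01 challenges to the ACME client
    use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
    default_backend horizon_back

backend acme_client_back
    # Assumed local listener of the ACME client
    server acme 127.0.0.1:8402

backend horizon_back
    balance roundrobin
    server node0 192.168.16.10:80 check
```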
skipping: [testbed-node-2]
2025-09-08 00:58:07.892828 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-08 00:58:07.892836 | orchestrator | Monday 08 September 2025 00:56:40 +0000 (0:00:00.327) 0:00:26.583 ******
2025-09-08 00:58:07.892848 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:58:07.892864 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-09-08 00:58:07.892872 | orchestrator | Monday 08 September 2025 00:56:41 +0000 (0:00:00.706) 0:00:27.290 ******
2025-09-08 00:58:07.892880 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:58:07.892895 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-08 00:58:07.892903 | orchestrator | Monday 08 September 2025 00:56:43 +0000 (0:00:02.158) 0:00:29.449 ******
2025-09-08 00:58:07.892911 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:58:07.892926 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-08 00:58:07.892934 | orchestrator | Monday 08 September 2025 00:56:45 +0000 (0:00:02.204) 0:00:31.653 ******
2025-09-08 00:58:07.892942 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:58:07.892958 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-08 00:58:07.892965 | orchestrator | Monday 08 September 2025 00:57:01 +0000 (0:00:15.749) 0:00:47.403 ******
2025-09-08 00:58:07.892981 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-08 00:58:07.892989 | orchestrator | Monday 08 September 2025 00:57:01 +0000 (0:00:00.068) 0:00:47.471 ******
2025-09-08 00:58:07.893011 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-08 00:58:07.893019 | orchestrator | Monday 08 September 2025 00:57:01 +0000 (0:00:00.063) 0:00:47.535 ******
2025-09-08 00:58:07.893035 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-08 00:58:07.893043 | orchestrator | Monday 08 September 2025 00:57:01 +0000 (0:00:00.075) 0:00:47.611 ******
2025-09-08 00:58:07.893050 | orchestrator | changed: [testbed-node-0]
2025-09-08 00:58:07.893058 | orchestrator | changed: [testbed-node-1]
2025-09-08 00:58:07.893066 | orchestrator | changed: [testbed-node-2]
2025-09-08 00:58:07.893082 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 00:58:07.893090 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-08 00:58:07.893098 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-08 00:58:07.893106 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-08 00:58:07.893135 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 00:58:07.893142 | orchestrator | Monday 08 September 2025 00:58:05 +0000 (0:01:03.121) 0:01:50.732 ******
2025-09-08 00:58:07.893150 | orchestrator | ===============================================================================
2025-09-08 00:58:07.893158 | orchestrator | horizon : Restart horizon container ------------------------------------ 63.12s
2025-09-08 00:58:07.893166 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.75s
2025-09-08 00:58:07.893174 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.47s
2025-09-08 00:58:07.893182 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.20s
2025-09-08 00:58:07.893189 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.16s
2025-09-08 00:58:07.893197 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.04s
2025-09-08 00:58:07.893205 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.66s
2025-09-08 00:58:07.893213 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.64s
2025-09-08 00:58:07.893221 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.53s
2025-09-08 00:58:07.893229 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.26s
2025-09-08 00:58:07.893237 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.26s
2025-09-08 00:58:07.893244 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.04s
2025-09-08 00:58:07.893252 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s
2025-09-08 00:58:07.893260 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s
2025-09-08 00:58:07.893268 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s
2025-09-08 00:58:07.893276 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s
2025-09-08 00:58:07.893284 | orchestrator | horizon : Update policy file name
--------------------------------------- 0.51s
2025-09-08 00:58:07.893291 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2025-09-08 00:58:07.893299 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s
2025-09-08 00:58:07.893307 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s
2025-09-08 00:58:07.893315 | orchestrator | 2025-09-08 00:58:07 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state STARTED
2025-09-08 00:58:07.893323 | orchestrator | 2025-09-08 00:58:07 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:58:10.928234 | orchestrator | 2025-09-08 00:58:10 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state STARTED
2025-09-08 00:58:44.469741 | orchestrator | 2025-09-08 00:58:44 | INFO  | Task d0599852-227e-47b5-8c11-433a08fcaf2a is in state SUCCESS
2025-09-08 00:58:44.473062 | orchestrator | 2025-09-08 00:58:44 | INFO  | Task b0a94e90-3462-4ea6-9cd7-901ba8b4beb9 is in state STARTED
2025-09-08 00:58:44.475751 | orchestrator | 2025-09-08 00:58:44 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:58:44.477624 | orchestrator | 2025-09-08 00:58:44 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:58:50.598228 | orchestrator | 2025-09-08 00:58:50 | INFO  | Task b0a94e90-3462-4ea6-9cd7-901ba8b4beb9 is in state SUCCESS
2025-09-08 00:58:53.654357 | orchestrator | 2025-09-08 00:58:53 | INFO  | Task b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state STARTED
2025-09-08 00:58:53.655963 | orchestrator | 2025-09-08 00:58:53 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:02.789634 | orchestrator | 2025-09-08 00:59:02 | INFO  | Task
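The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines come from a simple polling loop over Celery-style task IDs. A minimal sketch of the pattern (the `get_state` callable stands in for the real task-state lookup, which this log does not show):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll each task's state until every task has left STARTED.

    Returns a dict mapping task id to its final reported state.
    """
    pending = set(task_ids)
    final = {}
    while pending:
        # sorted() copies the set, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
                final[task_id] = state
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return final
```

The log shows several tasks tracked concurrently; each poll pass reports every still-pending task before sleeping again.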
39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:02.795198 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-09-08 00:59:02.795223 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-09-08 00:59:02.795235 | orchestrator | Monday 08 September 2025 00:57:47 +0000 (0:00:00.241) 0:00:00.241 ******
2025-09-08 00:59:02.795247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-09-08 00:59:02.795271 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-09-08 00:59:02.795282 | orchestrator | Monday 08 September 2025 00:57:47 +0000 (0:00:00.235) 0:00:00.477 ******
2025-09-08 00:59:02.795294 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-09-08 00:59:02.795305 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-09-08 00:59:02.795317 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-09-08 00:59:02.795340 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-09-08 00:59:02.795351 | orchestrator | Monday 08 September 2025 00:57:49 +0000 (0:00:01.245) 0:00:01.722 ******
2025-09-08 00:59:02.795379 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-09-08 00:59:02.795463 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-09-08 00:59:02.795476 | orchestrator | Monday 08 September 2025 00:57:50 +0000 (0:00:01.181) 0:00:02.904 ******
2025-09-08 00:59:02.795487 | orchestrator | changed: [testbed-manager]
2025-09-08 00:59:02.795509 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-09-08 00:59:02.795520 | orchestrator | Monday 08 September 2025 00:57:51 +0000 (0:00:00.974) 0:00:03.879 ******
2025-09-08 00:59:02.795531 | orchestrator | changed: [testbed-manager]
2025-09-08 00:59:02.795553 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-09-08 00:59:02.795563 | orchestrator | Monday 08 September 2025 00:57:52 +0000 (0:00:00.841) 0:00:04.720 ******
2025-09-08 00:59:02.795574 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-09-08 00:59:02.795585 | orchestrator | ok: [testbed-manager]
2025-09-08 00:59:02.795607 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-08 00:59:02.795619 | orchestrator | Monday 08 September 2025 00:58:33 +0000 (0:00:40.850) 0:00:45.570 ******
2025-09-08 00:59:02.795630 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-08 00:59:02.795641 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-08 00:59:02.795652 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-08 00:59:02.795690 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-08 00:59:02.795701 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-08 00:59:02.795723 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-08 00:59:02.795733 | orchestrator | Monday 08 September 2025 00:58:37 +0000 (0:00:04.198) 0:00:49.769 ******
2025-09-08 00:59:02.795744 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-08 00:59:02.795767 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-08 00:59:02.795779 | orchestrator | Monday 08 September 2025 00:58:37 +0000 (0:00:00.147) 0:00:50.256 ******
2025-09-08 00:59:02.795793 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:59:02.795819 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-08 00:59:02.795832 | orchestrator | Monday 08 September 2025 00:58:37 +0000 (0:00:00.304) 0:00:50.404 ******
2025-09-08 00:59:02.795845 | orchestrator | skipping: [testbed-manager]
2025-09-08 00:59:02.795872 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-09-08 00:59:02.795885 | orchestrator | Monday 08 September 2025 00:58:38 +0000 (0:00:00.304) 0:00:50.709 ******
2025-09-08 00:59:02.795897 | orchestrator | changed: [testbed-manager]
2025-09-08 00:59:02.795922 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-08 00:59:02.795934 | orchestrator | Monday 08 September 2025 00:58:39 +0000 (0:00:01.750) 0:00:52.459 ******
2025-09-08 00:59:02.795948 | orchestrator | changed: [testbed-manager]
2025-09-08 00:59:02.795974 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-09-08 00:59:02.795986 | orchestrator | Monday 08 September 2025 00:58:40 +0000 (0:00:00.800) 0:00:53.260 ******
2025-09-08 00:59:02.796000 | orchestrator | changed: [testbed-manager]
2025-09-08 00:59:02.796027 | orchestrator | RUNNING HANDLER [osism.services.cephclient :
Copy bash completion scripts] ***** 2025-09-08 00:59:02.796041 | orchestrator | Monday 08 September 2025 00:58:41 +0000 (0:00:00.665) 0:00:53.925 ****** 2025-09-08 00:59:02.796054 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-08 00:59:02.796067 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-08 00:59:02.796081 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-08 00:59:02.796094 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-08 00:59:02.796107 | orchestrator | 2025-09-08 00:59:02.796120 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:59:02.796133 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-08 00:59:02.796145 | orchestrator | 2025-09-08 00:59:02.796156 | orchestrator | 2025-09-08 00:59:02.796213 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:59:02.796226 | orchestrator | Monday 08 September 2025 00:58:42 +0000 (0:00:01.546) 0:00:55.471 ****** 2025-09-08 00:59:02.796237 | orchestrator | =============================================================================== 2025-09-08 00:59:02.796248 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.85s 2025-09-08 00:59:02.796259 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.20s 2025-09-08 00:59:02.796270 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.75s 2025-09-08 00:59:02.796281 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.55s 2025-09-08 00:59:02.796292 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s 2025-09-08 00:59:02.796303 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.18s 2025-09-08 
00:59:02.796314 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s 2025-09-08 00:59:02.796333 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.84s 2025-09-08 00:59:02.796344 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s 2025-09-08 00:59:02.796355 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.67s 2025-09-08 00:59:02.796373 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s 2025-09-08 00:59:02.796384 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2025-09-08 00:59:02.796395 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2025-09-08 00:59:02.796426 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-09-08 00:59:02.796437 | orchestrator | 2025-09-08 00:59:02.796447 | orchestrator | 2025-09-08 00:59:02.796458 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:59:02.796469 | orchestrator | 2025-09-08 00:59:02.796480 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:59:02.796491 | orchestrator | Monday 08 September 2025 00:58:47 +0000 (0:00:00.200) 0:00:00.200 ****** 2025-09-08 00:59:02.796502 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:02.796513 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:02.796524 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:59:02.796535 | orchestrator | 2025-09-08 00:59:02.796546 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:59:02.796557 | orchestrator | Monday 08 September 2025 00:58:47 +0000 (0:00:00.337) 0:00:00.537 ****** 2025-09-08 00:59:02.796568 | 
orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-08 00:59:02.796579 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-08 00:59:02.796590 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-08 00:59:02.796601 | orchestrator | 2025-09-08 00:59:02.796612 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-08 00:59:02.796623 | orchestrator | 2025-09-08 00:59:02.796633 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-08 00:59:02.796644 | orchestrator | Monday 08 September 2025 00:58:48 +0000 (0:00:00.831) 0:00:01.369 ****** 2025-09-08 00:59:02.796655 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:02.796666 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:02.796677 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:59:02.796687 | orchestrator | 2025-09-08 00:59:02.796698 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:59:02.796710 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:59:02.796721 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:59:02.796733 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 00:59:02.796744 | orchestrator | 2025-09-08 00:59:02.796754 | orchestrator | 2025-09-08 00:59:02.796765 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:59:02.796776 | orchestrator | Monday 08 September 2025 00:58:49 +0000 (0:00:00.664) 0:00:02.034 ****** 2025-09-08 00:59:02.796787 | orchestrator | =============================================================================== 2025-09-08 00:59:02.796798 | orchestrator | Group hosts based on enabled 
services ----------------------------------- 0.83s 2025-09-08 00:59:02.796809 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.67s 2025-09-08 00:59:02.796819 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-09-08 00:59:02.796830 | orchestrator | 2025-09-08 00:59:02.796841 | orchestrator | 2025-09-08 00:59:02.796852 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 00:59:02.796863 | orchestrator | 2025-09-08 00:59:02.796882 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 00:59:02.796893 | orchestrator | Monday 08 September 2025 00:56:14 +0000 (0:00:00.261) 0:00:00.262 ****** 2025-09-08 00:59:02.796904 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:02.796915 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:02.796926 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:59:02.796936 | orchestrator | 2025-09-08 00:59:02.796948 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 00:59:02.796958 | orchestrator | Monday 08 September 2025 00:56:14 +0000 (0:00:00.278) 0:00:00.541 ****** 2025-09-08 00:59:02.796969 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-08 00:59:02.796980 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-08 00:59:02.796991 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-08 00:59:02.797002 | orchestrator | 2025-09-08 00:59:02.797013 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-08 00:59:02.797024 | orchestrator | 2025-09-08 00:59:02.797066 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-08 00:59:02.797079 | orchestrator | Monday 08 September 2025 00:56:15 +0000 
(0:00:00.409) 0:00:00.950 ******
2025-09-08 00:59:02.797090 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:59:02.797102 | orchestrator |
2025-09-08 00:59:02.797113 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-09-08 00:59:02.797123 | orchestrator | Monday 08 September 2025 00:56:15 +0000 (0:00:00.541) 0:00:01.491 ******
2025-09-08 00:59:02.797148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:02.797166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:02.797180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:02.797232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:02.797251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:02.797278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:02.797298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:02.797317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:02.797335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:02.797365 | orchestrator |
2025-09-08 00:59:02.797383 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-09-08 00:59:02.797470 | orchestrator | Monday 08 September 2025 00:56:17 +0000 (0:00:01.660) 0:00:03.151 ******
2025-09-08 00:59:02.797491 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-09-08 00:59:02.797509 | orchestrator |
2025-09-08 00:59:02.797525 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-09-08 00:59:02.797535 | orchestrator | Monday 08 September 2025 00:56:18 +0000 (0:00:00.783) 0:00:03.935 ******
2025-09-08 00:59:02.797545 | orchestrator | ok: [testbed-node-0]
2025-09-08 00:59:02.797555 | orchestrator | ok: [testbed-node-1]
2025-09-08 00:59:02.797565 | orchestrator | ok: [testbed-node-2]
2025-09-08 00:59:02.797575 | orchestrator |
2025-09-08 00:59:02.797585 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-09-08 00:59:02.797594 | orchestrator | Monday 08 September 2025 00:56:18 +0000 (0:00:00.396) 0:00:04.331 ******
2025-09-08 00:59:02.797604 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 00:59:02.797614 | orchestrator |
2025-09-08 00:59:02.797624 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-08 00:59:02.797671 | orchestrator | Monday 08 September 2025 00:56:19 +0000 (0:00:00.705) 0:00:05.036 ******
2025-09-08 00:59:02.797682 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 00:59:02.797692 | orchestrator |
2025-09-08 00:59:02.797702 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-09-08 00:59:02.797711 | orchestrator | Monday 08 September 2025 00:56:19 +0000 (0:00:00.575) 0:00:05.612 ******
2025-09-08 00:59:02.797729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:02.797742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:02.797762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:02.797773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:02.797791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:02.797806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:02.797817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:02.797834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:02.797844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:02.797854 | orchestrator |
2025-09-08 00:59:02.797864 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-09-08 00:59:02.797874 | orchestrator | Monday 08 September 2025 00:56:23 +0000 (0:00:03.646) 0:00:09.258 ******
2025-09-08 00:59:02.797893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:02.797904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:02.797925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:02.797935 | orchestrator | skipping: [testbed-node-0]
2025-09-08 00:59:02.797946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:02.797963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:02.797973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-08 00:59:02.797983 | orchestrator | skipping: [testbed-node-1]
2025-09-08 00:59:02.798002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-08 00:59:02.798059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-08 00:59:02.798080 | orchestrator | skipping: [testbed-node-2]
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:59:02.798091 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.798101 | orchestrator | 2025-09-08 00:59:02.798111 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-08 00:59:02.798120 | orchestrator | Monday 08 September 2025 00:56:24 +0000 (0:00:00.550) 0:00:09.809 ****** 2025-09-08 00:59:02.798131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-09-08 00:59:02.798142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:02.798160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:59:02 | INFO  | Task 0203c514-2cf9-48a5-92c7-71238e8058bb is in state SUCCESS 2025-09-08 00:59:02.798184 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.798199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:59:02.798217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:02.798227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:59:02.798237 | 
orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:02.798248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-08 00:59:02.798267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:02.798283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-08 00:59:02.798299 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.798309 | orchestrator | 2025-09-08 00:59:02.798319 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-08 00:59:02.798329 | orchestrator | Monday 08 September 2025 00:56:24 +0000 (0:00:00.745) 0:00:10.554 ****** 2025-09-08 00:59:02.798339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.798351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.798368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.798379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:02.798420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:02.798431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2025-09-08 00:59:02.798441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.798451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.798462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 
00:59:02.798472 | orchestrator | 2025-09-08 00:59:02.798482 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-08 00:59:02.798497 | orchestrator | Monday 08 September 2025 00:56:28 +0000 (0:00:03.578) 0:00:14.133 ****** 2025-09-08 00:59:02.798513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.798530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2025-09-08 00:59:02.798540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.798551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:02.798567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.798585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:02.798600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.798610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.798620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.798631 | orchestrator | 2025-09-08 00:59:02.798641 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-08 00:59:02.798651 | orchestrator | Monday 08 September 2025 00:56:33 +0000 (0:00:05.320) 0:00:19.453 ****** 2025-09-08 00:59:02.798660 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:02.798671 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:59:02.798680 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:59:02.798690 | orchestrator | 
2025-09-08 00:59:02.798700 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-08 00:59:02.798710 | orchestrator | Monday 08 September 2025 00:56:35 +0000 (0:00:01.459) 0:00:20.913 ****** 2025-09-08 00:59:02.798719 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.798729 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:02.798739 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.798748 | orchestrator | 2025-09-08 00:59:02.798758 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-08 00:59:02.798768 | orchestrator | Monday 08 September 2025 00:56:35 +0000 (0:00:00.574) 0:00:21.487 ****** 2025-09-08 00:59:02.798777 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.798787 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:02.798797 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.798813 | orchestrator | 2025-09-08 00:59:02.798822 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-08 00:59:02.798832 | orchestrator | Monday 08 September 2025 00:56:36 +0000 (0:00:00.291) 0:00:21.779 ****** 2025-09-08 00:59:02.798842 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.798852 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:02.798861 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.798871 | orchestrator | 2025-09-08 00:59:02.798880 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-08 00:59:02.798895 | orchestrator | Monday 08 September 2025 00:56:36 +0000 (0:00:00.598) 0:00:22.378 ****** 2025-09-08 00:59:02.798914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.798925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:02.798936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.798947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:02.798969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.798981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-08 00:59:02.798996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.799007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.799017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.799027 | orchestrator | 2025-09-08 00:59:02.799037 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-08 00:59:02.799047 | orchestrator | Monday 08 September 2025 00:56:39 +0000 (0:00:02.503) 0:00:24.882 ****** 2025-09-08 00:59:02.799057 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.799067 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:02.799082 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.799092 | orchestrator | 2025-09-08 00:59:02.799102 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-08 00:59:02.799111 | orchestrator | Monday 08 September 2025 00:56:39 +0000 (0:00:00.303) 0:00:25.186 ****** 2025-09-08 00:59:02.799121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-08 00:59:02.799130 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-08 00:59:02.799140 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-08 00:59:02.799150 | orchestrator | 2025-09-08 00:59:02.799159 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-08 00:59:02.799169 | orchestrator | Monday 08 September 2025 00:56:41 +0000 (0:00:02.044) 0:00:27.230 ****** 2025-09-08 00:59:02.799179 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 00:59:02.799188 | orchestrator | 2025-09-08 00:59:02.799198 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-08 00:59:02.799207 | orchestrator | Monday 08 September 2025 00:56:42 +0000 (0:00:01.340) 0:00:28.571 ****** 2025-09-08 00:59:02.799217 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.799227 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:02.799236 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.799246 | orchestrator | 2025-09-08 00:59:02.799256 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-08 00:59:02.799271 | orchestrator | Monday 08 September 2025 00:56:43 +0000 (0:00:00.599) 0:00:29.171 ****** 2025-09-08 00:59:02.799281 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 00:59:02.799290 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-08 00:59:02.799300 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-08 00:59:02.799310 | orchestrator | 2025-09-08 00:59:02.799319 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-08 00:59:02.799329 | orchestrator | Monday 08 September 2025 00:56:44 +0000 (0:00:01.032) 0:00:30.204 ****** 2025-09-08 00:59:02.799339 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:02.799349 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:02.799358 | orchestrator | ok: [testbed-node-2] 2025-09-08 
00:59:02.799368 | orchestrator | 2025-09-08 00:59:02.799378 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-08 00:59:02.799387 | orchestrator | Monday 08 September 2025 00:56:44 +0000 (0:00:00.308) 0:00:30.512 ****** 2025-09-08 00:59:02.799441 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-08 00:59:02.799453 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-08 00:59:02.799463 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-08 00:59:02.799472 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-08 00:59:02.799487 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-08 00:59:02.799497 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-08 00:59:02.799507 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-08 00:59:02.799517 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-08 00:59:02.799527 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-08 00:59:02.799537 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-08 00:59:02.799546 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-08 00:59:02.799562 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-08 00:59:02.799572 | orchestrator | changed: [testbed-node-1] => (item={'src': 
'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-08 00:59:02.799582 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-08 00:59:02.799591 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-08 00:59:02.799601 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-08 00:59:02.799611 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-08 00:59:02.799620 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-08 00:59:02.799630 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-08 00:59:02.799640 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-08 00:59:02.799650 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-08 00:59:02.799659 | orchestrator | 2025-09-08 00:59:02.799669 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-08 00:59:02.799679 | orchestrator | Monday 08 September 2025 00:56:54 +0000 (0:00:09.549) 0:00:40.062 ****** 2025-09-08 00:59:02.799688 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-08 00:59:02.799698 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-08 00:59:02.799708 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-08 00:59:02.799717 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-08 00:59:02.799727 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 
'id_rsa.pub'}) 2025-09-08 00:59:02.799737 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-08 00:59:02.799746 | orchestrator | 2025-09-08 00:59:02.799756 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-08 00:59:02.799766 | orchestrator | Monday 08 September 2025 00:56:57 +0000 (0:00:02.776) 0:00:42.838 ****** 2025-09-08 00:59:02.799785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.799802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.799819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-08 00:59:02.799831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:02.799841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:02.799858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-08 00:59:02.799868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.799889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.799899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-08 00:59:02.799909 | orchestrator | 2025-09-08 00:59:02.799919 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-08 00:59:02.799929 | orchestrator | Monday 08 September 2025 00:56:59 +0000 (0:00:02.427) 0:00:45.266 ****** 2025-09-08 00:59:02.799939 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.799949 | 
orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:02.799959 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.799968 | orchestrator | 2025-09-08 00:59:02.799976 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-08 00:59:02.799984 | orchestrator | Monday 08 September 2025 00:56:59 +0000 (0:00:00.272) 0:00:45.538 ****** 2025-09-08 00:59:02.799992 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:02.800000 | orchestrator | 2025-09-08 00:59:02.800008 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-08 00:59:02.800016 | orchestrator | Monday 08 September 2025 00:57:02 +0000 (0:00:02.179) 0:00:47.718 ****** 2025-09-08 00:59:02.800024 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:02.800032 | orchestrator | 2025-09-08 00:59:02.800040 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-08 00:59:02.800048 | orchestrator | Monday 08 September 2025 00:57:04 +0000 (0:00:02.129) 0:00:49.848 ****** 2025-09-08 00:59:02.800056 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:02.800064 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:59:02.800072 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:02.800080 | orchestrator | 2025-09-08 00:59:02.800088 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-08 00:59:02.800096 | orchestrator | Monday 08 September 2025 00:57:05 +0000 (0:00:01.121) 0:00:50.970 ****** 2025-09-08 00:59:02.800103 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:02.800111 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:02.800119 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:59:02.800127 | orchestrator | 2025-09-08 00:59:02.800135 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-08 00:59:02.800143 | 
orchestrator | Monday 08 September 2025 00:57:05 +0000 (0:00:00.368) 0:00:51.338 ****** 2025-09-08 00:59:02.800151 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.800159 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:02.800167 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.800175 | orchestrator | 2025-09-08 00:59:02.800183 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-08 00:59:02.800191 | orchestrator | Monday 08 September 2025 00:57:06 +0000 (0:00:00.367) 0:00:51.706 ****** 2025-09-08 00:59:02.800199 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:02.800212 | orchestrator | 2025-09-08 00:59:02.800220 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-08 00:59:02.800232 | orchestrator | Monday 08 September 2025 00:57:20 +0000 (0:00:14.334) 0:01:06.041 ****** 2025-09-08 00:59:02.800241 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:02.800249 | orchestrator | 2025-09-08 00:59:02.800257 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-08 00:59:02.800265 | orchestrator | Monday 08 September 2025 00:57:30 +0000 (0:00:09.991) 0:01:16.032 ****** 2025-09-08 00:59:02.800273 | orchestrator | 2025-09-08 00:59:02.800281 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-08 00:59:02.800289 | orchestrator | Monday 08 September 2025 00:57:30 +0000 (0:00:00.067) 0:01:16.099 ****** 2025-09-08 00:59:02.800296 | orchestrator | 2025-09-08 00:59:02.800304 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-08 00:59:02.800312 | orchestrator | Monday 08 September 2025 00:57:30 +0000 (0:00:00.256) 0:01:16.356 ****** 2025-09-08 00:59:02.800320 | orchestrator | 2025-09-08 00:59:02.800328 | orchestrator | RUNNING HANDLER [keystone : Restart 
keystone-ssh container] ******************** 2025-09-08 00:59:02.800336 | orchestrator | Monday 08 September 2025 00:57:30 +0000 (0:00:00.069) 0:01:16.426 ****** 2025-09-08 00:59:02.800344 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:02.800352 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:59:02.800359 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:59:02.800367 | orchestrator | 2025-09-08 00:59:02.800375 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-08 00:59:02.800383 | orchestrator | Monday 08 September 2025 00:57:50 +0000 (0:00:19.929) 0:01:36.355 ****** 2025-09-08 00:59:02.800391 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:02.800417 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:59:02.800426 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:59:02.800434 | orchestrator | 2025-09-08 00:59:02.800442 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-08 00:59:02.800450 | orchestrator | Monday 08 September 2025 00:58:01 +0000 (0:00:10.582) 0:01:46.938 ****** 2025-09-08 00:59:02.800458 | orchestrator | changed: [testbed-node-1] 2025-09-08 00:59:02.800466 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:02.800473 | orchestrator | changed: [testbed-node-2] 2025-09-08 00:59:02.800481 | orchestrator | 2025-09-08 00:59:02.800489 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-08 00:59:02.800497 | orchestrator | Monday 08 September 2025 00:58:13 +0000 (0:00:11.724) 0:01:58.663 ****** 2025-09-08 00:59:02.800505 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 00:59:02.800513 | orchestrator | 2025-09-08 00:59:02.800521 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-08 
00:59:02.800530 | orchestrator | Monday 08 September 2025 00:58:13 +0000 (0:00:00.767) 0:01:59.430 ****** 2025-09-08 00:59:02.800537 | orchestrator | ok: [testbed-node-1] 2025-09-08 00:59:02.800546 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:02.800554 | orchestrator | ok: [testbed-node-2] 2025-09-08 00:59:02.800562 | orchestrator | 2025-09-08 00:59:02.800570 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-08 00:59:02.800577 | orchestrator | Monday 08 September 2025 00:58:14 +0000 (0:00:00.767) 0:02:00.198 ****** 2025-09-08 00:59:02.800585 | orchestrator | changed: [testbed-node-0] 2025-09-08 00:59:02.800593 | orchestrator | 2025-09-08 00:59:02.800601 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-08 00:59:02.800609 | orchestrator | Monday 08 September 2025 00:58:16 +0000 (0:00:01.843) 0:02:02.042 ****** 2025-09-08 00:59:02.800617 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-08 00:59:02.800625 | orchestrator | 2025-09-08 00:59:02.800633 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-08 00:59:02.800647 | orchestrator | Monday 08 September 2025 00:58:26 +0000 (0:00:10.280) 0:02:12.323 ****** 2025-09-08 00:59:02.800655 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-08 00:59:02.800663 | orchestrator | 2025-09-08 00:59:02.800671 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-08 00:59:02.800679 | orchestrator | Monday 08 September 2025 00:58:47 +0000 (0:00:20.507) 0:02:32.831 ****** 2025-09-08 00:59:02.800687 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-08 00:59:02.800695 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-08 
00:59:02.800703 | orchestrator | 2025-09-08 00:59:02.800711 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-08 00:59:02.800719 | orchestrator | Monday 08 September 2025 00:58:53 +0000 (0:00:06.758) 0:02:39.589 ****** 2025-09-08 00:59:02.800727 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.800734 | orchestrator | 2025-09-08 00:59:02.800742 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-08 00:59:02.800750 | orchestrator | Monday 08 September 2025 00:58:54 +0000 (0:00:00.127) 0:02:39.716 ****** 2025-09-08 00:59:02.800758 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.800766 | orchestrator | 2025-09-08 00:59:02.800774 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-08 00:59:02.800782 | orchestrator | Monday 08 September 2025 00:58:54 +0000 (0:00:00.444) 0:02:40.161 ****** 2025-09-08 00:59:02.800790 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.800798 | orchestrator | 2025-09-08 00:59:02.800806 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-08 00:59:02.800814 | orchestrator | Monday 08 September 2025 00:58:54 +0000 (0:00:00.134) 0:02:40.295 ****** 2025-09-08 00:59:02.800822 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.800830 | orchestrator | 2025-09-08 00:59:02.800838 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-08 00:59:02.800846 | orchestrator | Monday 08 September 2025 00:58:55 +0000 (0:00:00.391) 0:02:40.687 ****** 2025-09-08 00:59:02.800854 | orchestrator | ok: [testbed-node-0] 2025-09-08 00:59:02.800862 | orchestrator | 2025-09-08 00:59:02.800870 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-08 00:59:02.800883 | orchestrator | Monday 08 
September 2025 00:58:58 +0000 (0:00:03.227) 0:02:43.915 ****** 2025-09-08 00:59:02.800891 | orchestrator | skipping: [testbed-node-0] 2025-09-08 00:59:02.800899 | orchestrator | skipping: [testbed-node-1] 2025-09-08 00:59:02.800907 | orchestrator | skipping: [testbed-node-2] 2025-09-08 00:59:02.800915 | orchestrator | 2025-09-08 00:59:02.800923 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 00:59:02.800932 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-08 00:59:02.800941 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-08 00:59:02.800949 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-08 00:59:02.800957 | orchestrator | 2025-09-08 00:59:02.800965 | orchestrator | 2025-09-08 00:59:02.800973 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 00:59:02.800981 | orchestrator | Monday 08 September 2025 00:58:59 +0000 (0:00:01.036) 0:02:44.951 ****** 2025-09-08 00:59:02.800989 | orchestrator | =============================================================================== 2025-09-08 00:59:02.800997 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.51s 2025-09-08 00:59:02.801009 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.93s 2025-09-08 00:59:02.801017 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.33s 2025-09-08 00:59:02.801030 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.72s 2025-09-08 00:59:02.801038 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.58s 2025-09-08 00:59:02.801046 | orchestrator | keystone : Creating admin 
project, user, role, service, and endpoint --- 10.28s 2025-09-08 00:59:02.801054 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.99s 2025-09-08 00:59:02.801061 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.55s 2025-09-08 00:59:02.801069 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.76s 2025-09-08 00:59:02.801077 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.32s 2025-09-08 00:59:02.801085 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.65s 2025-09-08 00:59:02.801093 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.58s 2025-09-08 00:59:02.801101 | orchestrator | keystone : Creating default user role ----------------------------------- 3.23s 2025-09-08 00:59:02.801109 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.78s 2025-09-08 00:59:02.801117 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.50s 2025-09-08 00:59:02.801125 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.43s 2025-09-08 00:59:02.801133 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.18s 2025-09-08 00:59:02.801141 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.13s 2025-09-08 00:59:02.801149 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.04s 2025-09-08 00:59:02.801157 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.84s 2025-09-08 00:59:02.801165 | orchestrator | 2025-09-08 00:59:02 | INFO  | Wait 1 second(s) until the next check 2025-09-08 00:59:05.827599 | orchestrator | 2025-09-08 00:59:05 | INFO  | Task 
b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state STARTED
2025-09-08 00:59:05.828686 | orchestrator | 2025-09-08 00:59:05 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:05.829979 | orchestrator | 2025-09-08 00:59:05 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:05.832284 | orchestrator | 2025-09-08 00:59:05 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:05.832775 | orchestrator | 2025-09-08 00:59:05 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:05.832799 | orchestrator | 2025-09-08 00:59:05 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:08.865842 | orchestrator | 2025-09-08 00:59:08 | INFO  | Task b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state STARTED
2025-09-08 00:59:08.867803 | orchestrator | 2025-09-08 00:59:08 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:08.868929 | orchestrator | 2025-09-08 00:59:08 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:08.870813 | orchestrator | 2025-09-08 00:59:08 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:08.872201 | orchestrator | 2025-09-08 00:59:08 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:08.872343 | orchestrator | 2025-09-08 00:59:08 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:11.917669 | orchestrator | 2025-09-08 00:59:11 | INFO  | Task b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state STARTED
2025-09-08 00:59:11.919713 | orchestrator | 2025-09-08 00:59:11 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:11.921842 | orchestrator | 2025-09-08 00:59:11 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:11.924080 | orchestrator | 2025-09-08 00:59:11 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:11.925621 | orchestrator | 2025-09-08 00:59:11 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:11.925685 | orchestrator | 2025-09-08 00:59:11 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:14.969140 | orchestrator | 2025-09-08 00:59:14 | INFO  | Task b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state STARTED
2025-09-08 00:59:14.969264 | orchestrator | 2025-09-08 00:59:14 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:14.969279 | orchestrator | 2025-09-08 00:59:14 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:14.969314 | orchestrator | 2025-09-08 00:59:14 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:14.969326 | orchestrator | 2025-09-08 00:59:14 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:14.969338 | orchestrator | 2025-09-08 00:59:14 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:18.024075 | orchestrator | 2025-09-08 00:59:18 | INFO  | Task b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state STARTED
2025-09-08 00:59:18.024765 | orchestrator | 2025-09-08 00:59:18 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:18.027075 | orchestrator | 2025-09-08 00:59:18 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:18.029121 | orchestrator | 2025-09-08 00:59:18 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:18.031471 | orchestrator | 2025-09-08 00:59:18 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:18.031565 | orchestrator | 2025-09-08 00:59:18 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:21.527306 | orchestrator | 2025-09-08 00:59:21 | INFO  | Task b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state STARTED
2025-09-08 00:59:21.527482 | orchestrator | 2025-09-08 00:59:21 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:21.527497 | orchestrator | 2025-09-08 00:59:21 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:21.527510 | orchestrator | 2025-09-08 00:59:21 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:21.527521 | orchestrator | 2025-09-08 00:59:21 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:21.527533 | orchestrator | 2025-09-08 00:59:21 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:24.145193 | orchestrator | 2025-09-08 00:59:24 | INFO  | Task b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state STARTED
2025-09-08 00:59:24.145327 | orchestrator | 2025-09-08 00:59:24 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:24.145343 | orchestrator | 2025-09-08 00:59:24 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:24.145355 | orchestrator | 2025-09-08 00:59:24 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:24.145367 | orchestrator | 2025-09-08 00:59:24 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:24.145380 | orchestrator | 2025-09-08 00:59:24 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:27.268007 | orchestrator | 2025-09-08 00:59:27 | INFO  | Task b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state STARTED
2025-09-08 00:59:27.268158 | orchestrator | 2025-09-08 00:59:27 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:27.268175 | orchestrator | 2025-09-08 00:59:27 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:27.268187 | orchestrator | 2025-09-08 00:59:27 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:27.268198 | orchestrator | 2025-09-08 00:59:27 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:27.268210 | orchestrator | 2025-09-08 00:59:27 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:30.204751 | orchestrator | 2025-09-08 00:59:30 | INFO  | Task b048f93b-f98e-4c6e-8a8f-f8b0cb655945 is in state SUCCESS
2025-09-08 00:59:30.204877 | orchestrator | 2025-09-08 00:59:30 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:30.204892 | orchestrator | 2025-09-08 00:59:30 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:30.204903 | orchestrator | 2025-09-08 00:59:30 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:30.204915 | orchestrator | 2025-09-08 00:59:30 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:30.204927 | orchestrator | 2025-09-08 00:59:30 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:33.229663 | orchestrator | 2025-09-08 00:59:33 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 00:59:33.229942 | orchestrator | 2025-09-08 00:59:33 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:33.230488 | orchestrator | 2025-09-08 00:59:33 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:33.231196 | orchestrator | 2025-09-08 00:59:33 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:33.231931 | orchestrator | 2025-09-08 00:59:33 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:33.231959 | orchestrator | 2025-09-08 00:59:33 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:36.260025 | orchestrator | 2025-09-08 00:59:36 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 00:59:36.260181 | orchestrator | 2025-09-08 00:59:36 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:36.260537 | orchestrator | 2025-09-08 00:59:36 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:36.261329 | orchestrator | 2025-09-08 00:59:36 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:36.262216 | orchestrator | 2025-09-08 00:59:36 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:36.262239 | orchestrator | 2025-09-08 00:59:36 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:39.295336 | orchestrator | 2025-09-08 00:59:39 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 00:59:39.295549 | orchestrator | 2025-09-08 00:59:39 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:39.295726 | orchestrator | 2025-09-08 00:59:39 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:39.296428 | orchestrator | 2025-09-08 00:59:39 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:39.297381 | orchestrator | 2025-09-08 00:59:39 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:39.297476 | orchestrator | 2025-09-08 00:59:39 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:42.331147 | orchestrator | 2025-09-08 00:59:42 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 00:59:42.331797 | orchestrator | 2025-09-08 00:59:42 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:42.339586 | orchestrator | 2025-09-08 00:59:42 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:42.339614 | orchestrator | 2025-09-08 00:59:42 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:42.339626 | orchestrator | 2025-09-08 00:59:42 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:42.339637 | orchestrator | 2025-09-08 00:59:42 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:45.373653 | orchestrator | 2025-09-08 00:59:45 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 00:59:45.374218 | orchestrator | 2025-09-08 00:59:45 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:45.375384 | orchestrator | 2025-09-08 00:59:45 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:45.376221 | orchestrator | 2025-09-08 00:59:45 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:45.377051 | orchestrator | 2025-09-08 00:59:45 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:45.377186 | orchestrator | 2025-09-08 00:59:45 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:48.405824 | orchestrator | 2025-09-08 00:59:48 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 00:59:48.406123 | orchestrator | 2025-09-08 00:59:48 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:48.407252 | orchestrator | 2025-09-08 00:59:48 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:48.407869 | orchestrator | 2025-09-08 00:59:48 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:48.409339 | orchestrator | 2025-09-08 00:59:48 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:48.409365 | orchestrator | 2025-09-08 00:59:48 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:51.439786 | orchestrator | 2025-09-08 00:59:51 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 00:59:51.442491 | orchestrator | 2025-09-08 00:59:51 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:51.442541 | orchestrator | 2025-09-08 00:59:51 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:51.442555 | orchestrator | 2025-09-08 00:59:51 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:51.442567 | orchestrator | 2025-09-08 00:59:51 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:51.442580 | orchestrator | 2025-09-08 00:59:51 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:54.465322 | orchestrator | 2025-09-08 00:59:54 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 00:59:54.465462 | orchestrator | 2025-09-08 00:59:54 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:54.465990 | orchestrator | 2025-09-08 00:59:54 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:54.466626 | orchestrator | 2025-09-08 00:59:54 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:54.467295 | orchestrator | 2025-09-08 00:59:54 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:54.467317 | orchestrator | 2025-09-08 00:59:54 | INFO  | Wait 1 second(s) until the next check
2025-09-08 00:59:57.497638 | orchestrator | 2025-09-08 00:59:57 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 00:59:57.497774 | orchestrator | 2025-09-08 00:59:57 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 00:59:57.498327 | orchestrator | 2025-09-08 00:59:57 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 00:59:57.499058 | orchestrator | 2025-09-08 00:59:57 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 00:59:57.499681 | orchestrator | 2025-09-08 00:59:57 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 00:59:57.500449 | orchestrator | 2025-09-08 00:59:57 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:00.529828 | orchestrator | 2025-09-08 01:00:00 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:00.530716 | orchestrator | 2025-09-08 01:00:00 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 01:00:00.531431 | orchestrator | 2025-09-08 01:00:00 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:00.532258 | orchestrator | 2025-09-08 01:00:00 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:00.533078 | orchestrator | 2025-09-08 01:00:00 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:00.533103 | orchestrator | 2025-09-08 01:00:00 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:03.566260 | orchestrator | 2025-09-08 01:00:03 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:03.567498 | orchestrator | 2025-09-08 01:00:03 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 01:00:03.567527 | orchestrator | 2025-09-08 01:00:03 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:03.568060 | orchestrator | 2025-09-08 01:00:03 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:03.568822 | orchestrator | 2025-09-08 01:00:03 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:03.568843 | orchestrator | 2025-09-08 01:00:03 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:06.592013 | orchestrator | 2025-09-08 01:00:06 | INFO  | Task
f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:06.592423 | orchestrator | 2025-09-08 01:00:06 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state STARTED
2025-09-08 01:00:06.593079 | orchestrator | 2025-09-08 01:00:06 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:06.593873 | orchestrator | 2025-09-08 01:00:06 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:06.594793 | orchestrator | 2025-09-08 01:00:06 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:06.594867 | orchestrator | 2025-09-08 01:00:06 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:09.618813 | orchestrator | 2025-09-08 01:00:09 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:09.618968 | orchestrator | 2025-09-08 01:00:09 | INFO  | Task afd23823-e9d9-4fa7-95c9-f0ba0337f901 is in state SUCCESS
2025-09-08 01:00:09.619993 | orchestrator |
2025-09-08 01:00:09.620012 | orchestrator |
2025-09-08 01:00:09.620020 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:00:09.620027 | orchestrator |
2025-09-08 01:00:09.620035 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:00:09.620042 | orchestrator | Monday 08 September 2025 00:58:55 +0000 (0:00:00.317) 0:00:00.317 ******
2025-09-08 01:00:09.620050 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:00:09.620058 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:00:09.620065 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:00:09.620072 | orchestrator | ok: [testbed-manager]
2025-09-08 01:00:09.620079 | orchestrator | ok: [testbed-node-3]
2025-09-08 01:00:09.620086 | orchestrator | ok: [testbed-node-4]
2025-09-08 01:00:09.620093 | orchestrator | ok: [testbed-node-5]
2025-09-08 01:00:09.620101 | orchestrator |
2025-09-08 01:00:09.620109 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:00:09.620116 | orchestrator | Monday 08 September 2025 00:58:56 +0000 (0:00:00.971) 0:00:01.289 ******
2025-09-08 01:00:09.620123 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:09.620131 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:09.620138 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:09.620145 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:09.620152 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:09.620159 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:09.620167 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-09-08 01:00:09.620174 | orchestrator |
2025-09-08 01:00:09.620181 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-08 01:00:09.620189 | orchestrator |
2025-09-08 01:00:09.620196 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-09-08 01:00:09.620203 | orchestrator | Monday 08 September 2025 00:58:57 +0000 (0:00:00.748) 0:00:02.038 ******
2025-09-08 01:00:09.620211 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 01:00:09.620219 | orchestrator |
2025-09-08 01:00:09.620226 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-09-08 01:00:09.620233 | orchestrator | Monday 08 September 2025 00:59:00 +0000 (0:00:03.085) 0:00:05.123 ******
2025-09-08 01:00:09.620241 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-09-08 01:00:09.620248 | orchestrator |
2025-09-08 01:00:09.620255 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-09-08 01:00:09.620262 | orchestrator | Monday 08 September 2025 00:59:03 +0000 (0:00:03.323) 0:00:08.447 ******
2025-09-08 01:00:09.620270 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-09-08 01:00:09.620278 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-09-08 01:00:09.620285 | orchestrator |
2025-09-08 01:00:09.620318 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-09-08 01:00:09.620326 | orchestrator | Monday 08 September 2025 00:59:09 +0000 (0:00:05.577) 0:00:14.024 ******
2025-09-08 01:00:09.620334 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-08 01:00:09.620341 | orchestrator |
2025-09-08 01:00:09.620349 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-09-08 01:00:09.620453 | orchestrator | Monday 08 September 2025 00:59:12 +0000 (0:00:02.919) 0:00:16.944 ******
2025-09-08 01:00:09.620461 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:00:09.620469 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-09-08 01:00:09.620486 | orchestrator |
2025-09-08 01:00:09.620494 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-09-08 01:00:09.620502 | orchestrator | Monday 08 September 2025 00:59:15 +0000 (0:00:03.543) 0:00:20.487 ******
2025-09-08 01:00:09.620509 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:00:09.620517 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-09-08 01:00:09.620525 | orchestrator |
2025-09-08 01:00:09.620533 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-09-08 01:00:09.620540 | orchestrator | Monday 08 September 2025 00:59:22 +0000 (0:00:06.283) 0:00:26.771 ******
2025-09-08 01:00:09.620548 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-09-08 01:00:09.620556 | orchestrator |
2025-09-08 01:00:09.620563 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 01:00:09.620571 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.620579 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.620587 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.620595 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.620603 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.620623 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.620632 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.620640 | orchestrator |
2025-09-08 01:00:09.620648 | orchestrator |
2025-09-08 01:00:09.620655 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 01:00:09.620663 | orchestrator | Monday 08 September 2025 00:59:29 +0000 (0:00:07.230) 0:00:34.001 ******
2025-09-08 01:00:09.620671 | orchestrator | ===============================================================================
2025-09-08 01:00:09.620679 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 7.23s
2025-09-08 01:00:09.620686 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.28s
2025-09-08 01:00:09.620694 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.58s
2025-09-08 01:00:09.620701 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.54s
2025-09-08 01:00:09.620709 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.32s
2025-09-08 01:00:09.620716 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.09s
2025-09-08 01:00:09.620724 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.92s
2025-09-08 01:00:09.620732 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.97s
2025-09-08 01:00:09.620739 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s
2025-09-08 01:00:09.620747 | orchestrator |
2025-09-08 01:00:09.620754 | orchestrator |
2025-09-08 01:00:09.620762 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2025-09-08 01:00:09.620769 | orchestrator |
2025-09-08 01:00:09.620777 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-09-08 01:00:09.620785 | orchestrator | Monday 08 September 2025 00:58:47 +0000 (0:00:00.268) 0:00:00.268 ******
2025-09-08 01:00:09.620792 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:09.620800 | orchestrator |
2025-09-08 01:00:09.620807 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-09-08 01:00:09.620819 | orchestrator | Monday 08 September 2025 00:58:49 +0000 (0:00:01.746) 0:00:02.015 ******
2025-09-08 01:00:09.620826 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:09.620833 | orchestrator |
2025-09-08 01:00:09.620840 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-09-08 01:00:09.620847 | orchestrator | Monday 08 September 2025 00:58:50 +0000 (0:00:01.121) 0:00:03.136 ******
2025-09-08 01:00:09.620854 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:09.620861 | orchestrator |
2025-09-08 01:00:09.620869 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-09-08 01:00:09.620876 | orchestrator | Monday 08 September 2025 00:58:51 +0000 (0:00:01.280) 0:00:04.416 ******
2025-09-08 01:00:09.620883 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:09.620890 | orchestrator |
2025-09-08 01:00:09.620898 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-09-08 01:00:09.620905 | orchestrator | Monday 08 September 2025 00:58:53 +0000 (0:00:01.348) 0:00:05.764 ******
2025-09-08 01:00:09.620912 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:09.620919 | orchestrator |
2025-09-08 01:00:09.620926 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-09-08 01:00:09.620933 | orchestrator | Monday 08 September 2025 00:58:54 +0000 (0:00:01.339) 0:00:07.104 ******
2025-09-08 01:00:09.620941 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:09.620948 | orchestrator |
2025-09-08 01:00:09.620955 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-09-08 01:00:09.620962 | orchestrator | Monday 08 September 2025 00:58:55 +0000 (0:00:01.108) 0:00:08.212 ******
2025-09-08 01:00:09.620969 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:09.620976 | orchestrator |
2025-09-08 01:00:09.620983 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-09-08 01:00:09.620991 | orchestrator | Monday 08 September 2025 00:58:57 +0000 (0:00:02.037) 0:00:10.250 ******
2025-09-08 01:00:09.620998 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:09.621005 | orchestrator |
2025-09-08 01:00:09.621012 | orchestrator | TASK [Create admin user] *******************************************************
2025-09-08 01:00:09.621019 | orchestrator | Monday 08 September 2025 00:58:58 +0000 (0:00:01.431) 0:00:11.681 ******
2025-09-08 01:00:09.621026 | orchestrator | changed: [testbed-manager]
2025-09-08 01:00:09.621033 | orchestrator |
2025-09-08 01:00:09.621041 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-09-08 01:00:09.621048 | orchestrator | Monday 08 September 2025 00:59:43 +0000 (0:00:45.015) 0:00:56.697 ******
2025-09-08 01:00:09.621055 | orchestrator | skipping: [testbed-manager]
2025-09-08 01:00:09.621062 | orchestrator |
2025-09-08 01:00:09.621069 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-08 01:00:09.621076 | orchestrator |
2025-09-08 01:00:09.621085 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-08 01:00:09.621094 | orchestrator | Monday 08 September 2025 00:59:44 +0000 (0:00:00.144) 0:00:56.841 ******
2025-09-08 01:00:09.621104 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:00:09.621113 | orchestrator |
2025-09-08 01:00:09.621122 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-08 01:00:09.621131 | orchestrator |
2025-09-08 01:00:09.621140 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-08 01:00:09.621148 | orchestrator | Monday 08 September 2025 00:59:45 +0000 (0:00:01.620) 0:00:58.462 ******
2025-09-08 01:00:09.621158 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:00:09.621166 | orchestrator |
2025-09-08 01:00:09.621175 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-08 01:00:09.621184 | orchestrator |
2025-09-08 01:00:09.621193 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-08 01:00:09.621201 | orchestrator | Monday 08 September 2025 00:59:57 +0000 (0:00:11.315) 0:01:09.777 ******
2025-09-08 01:00:09.621217 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:00:09.621227 | orchestrator |
2025-09-08 01:00:09.621240 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 01:00:09.621249 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-08 01:00:09.621258 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.621267 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.621276 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:00:09.621285 | orchestrator |
2025-09-08 01:00:09.621293 | orchestrator |
2025-09-08 01:00:09.621303 | orchestrator |
2025-09-08 01:00:09.621311 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 01:00:09.621320 | orchestrator | Monday 08 September 2025 01:00:08 +0000 (0:00:11.217) 0:01:20.994 ******
2025-09-08 01:00:09.621329 | orchestrator | ===============================================================================
2025-09-08 01:00:09.621338 | orchestrator | Create admin user ------------------------------------------------------ 45.02s
2025-09-08 01:00:09.621346 | orchestrator | Restart ceph manager service ------------------------------------------- 24.15s
2025-09-08 01:00:09.621355 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.04s
2025-09-08 01:00:09.621363 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.75s
2025-09-08 01:00:09.621372 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.43s
2025-09-08 01:00:09.621394 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.35s
2025-09-08 01:00:09.621403 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.34s
2025-09-08 01:00:09.621412 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.28s
2025-09-08 01:00:09.621422 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.12s
2025-09-08 01:00:09.621431 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.11s
2025-09-08 01:00:09.621440 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
2025-09-08 01:00:09.621448 | orchestrator | 2025-09-08 01:00:09 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:09.621676 | orchestrator | 2025-09-08 01:00:09 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:09.622243 | orchestrator | 2025-09-08 01:00:09 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:09.622334 | orchestrator | 2025-09-08 01:00:09 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:12.644643 | orchestrator | 2025-09-08 01:00:12 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:12.645179 | orchestrator | 2025-09-08 01:00:12 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:12.646003 | orchestrator | 2025-09-08 01:00:12 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:12.646596 | orchestrator | 2025-09-08 01:00:12 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08
01:00:12.646614 | orchestrator | 2025-09-08 01:00:12 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:15.674004 | orchestrator | 2025-09-08 01:00:15 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:15.674317 | orchestrator | 2025-09-08 01:00:15 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:15.675357 | orchestrator | 2025-09-08 01:00:15 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:15.676170 | orchestrator | 2025-09-08 01:00:15 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:15.676196 | orchestrator | 2025-09-08 01:00:15 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:18.731195 | orchestrator | 2025-09-08 01:00:18 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:18.742923 | orchestrator | 2025-09-08 01:00:18 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:18.748898 | orchestrator | 2025-09-08 01:00:18 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:18.754957 | orchestrator | 2025-09-08 01:00:18 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:18.754989 | orchestrator | 2025-09-08 01:00:18 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:21.787562 | orchestrator | 2025-09-08 01:00:21 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:21.788118 | orchestrator | 2025-09-08 01:00:21 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:21.788848 | orchestrator | 2025-09-08 01:00:21 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:21.789618 | orchestrator | 2025-09-08 01:00:21 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:21.789735 | orchestrator | 2025-09-08 01:00:21 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:24.827901 | orchestrator | 2025-09-08 01:00:24 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:24.828019 | orchestrator | 2025-09-08 01:00:24 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:24.828495 | orchestrator | 2025-09-08 01:00:24 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:24.829315 | orchestrator | 2025-09-08 01:00:24 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:24.829338 | orchestrator | 2025-09-08 01:00:24 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:27.848949 | orchestrator | 2025-09-08 01:00:27 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:27.849078 | orchestrator | 2025-09-08 01:00:27 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:27.849712 | orchestrator | 2025-09-08 01:00:27 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:27.850525 | orchestrator | 2025-09-08 01:00:27 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:27.850549 | orchestrator | 2025-09-08 01:00:27 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:30.881103 | orchestrator | 2025-09-08 01:00:30 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:30.881229 | orchestrator | 2025-09-08 01:00:30 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:30.881243 | orchestrator | 2025-09-08 01:00:30 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:30.881254 | orchestrator | 2025-09-08 01:00:30 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:30.881264 | orchestrator | 2025-09-08 01:00:30 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:33.912673 | orchestrator | 2025-09-08 01:00:33 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:33.912916 | orchestrator | 2025-09-08 01:00:33 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:33.913711 | orchestrator | 2025-09-08 01:00:33 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:33.916596 | orchestrator | 2025-09-08 01:00:33 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:33.916619 | orchestrator | 2025-09-08 01:00:33 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:36.952631 | orchestrator | 2025-09-08 01:00:36 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:36.953427 | orchestrator | 2025-09-08 01:00:36 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:36.953992 | orchestrator | 2025-09-08 01:00:36 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:36.954740 | orchestrator | 2025-09-08 01:00:36 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:36.955171 | orchestrator | 2025-09-08 01:00:36 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:00:39.991610 | orchestrator | 2025-09-08 01:00:39 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:00:39.994848 | orchestrator | 2025-09-08 01:00:39 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:00:39.997276 | orchestrator | 2025-09-08 01:00:39 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:00:39.999428 | orchestrator | 2025-09-08 01:00:39 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:00:39.999465 | orchestrator | 2025-09-08 01:00:39 | INFO  | Wait 1 second(s) until the next
check 2025-09-08 01:00:43.046855 | orchestrator | 2025-09-08 01:00:43 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:00:43.048772 | orchestrator | 2025-09-08 01:00:43 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:00:43.050277 | orchestrator | 2025-09-08 01:00:43 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:00:43.052703 | orchestrator | 2025-09-08 01:00:43 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:00:43.053967 | orchestrator | 2025-09-08 01:00:43 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:00:46.108771 | orchestrator | 2025-09-08 01:00:46 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:00:46.112901 | orchestrator | 2025-09-08 01:00:46 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:00:46.114318 | orchestrator | 2025-09-08 01:00:46 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:00:46.116807 | orchestrator | 2025-09-08 01:00:46 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:00:46.116831 | orchestrator | 2025-09-08 01:00:46 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:00:49.161301 | orchestrator | 2025-09-08 01:00:49 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:00:49.162738 | orchestrator | 2025-09-08 01:00:49 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:00:49.164362 | orchestrator | 2025-09-08 01:00:49 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:00:49.165771 | orchestrator | 2025-09-08 01:00:49 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:00:49.165817 | orchestrator | 2025-09-08 01:00:49 | INFO  | Wait 1 second(s) until the next check 2025-09-08 
01:00:52.219548 | orchestrator | 2025-09-08 01:00:52 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:00:52.222582 | orchestrator | 2025-09-08 01:00:52 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:00:52.225120 | orchestrator | 2025-09-08 01:00:52 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:00:52.228207 | orchestrator | 2025-09-08 01:00:52 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:00:52.228230 | orchestrator | 2025-09-08 01:00:52 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:00:55.262980 | orchestrator | 2025-09-08 01:00:55 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:00:55.264816 | orchestrator | 2025-09-08 01:00:55 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:00:55.266696 | orchestrator | 2025-09-08 01:00:55 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:00:55.269566 | orchestrator | 2025-09-08 01:00:55 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:00:55.269722 | orchestrator | 2025-09-08 01:00:55 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:00:58.312947 | orchestrator | 2025-09-08 01:00:58 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:00:58.313869 | orchestrator | 2025-09-08 01:00:58 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:00:58.315474 | orchestrator | 2025-09-08 01:00:58 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:00:58.316752 | orchestrator | 2025-09-08 01:00:58 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:00:58.316776 | orchestrator | 2025-09-08 01:00:58 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:01.389471 | orchestrator 
| 2025-09-08 01:01:01 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:01.390127 | orchestrator | 2025-09-08 01:01:01 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:01.390746 | orchestrator | 2025-09-08 01:01:01 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:01.391469 | orchestrator | 2025-09-08 01:01:01 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:01.391487 | orchestrator | 2025-09-08 01:01:01 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:04.432359 | orchestrator | 2025-09-08 01:01:04 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:04.433852 | orchestrator | 2025-09-08 01:01:04 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:04.435608 | orchestrator | 2025-09-08 01:01:04 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:04.436793 | orchestrator | 2025-09-08 01:01:04 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:04.436816 | orchestrator | 2025-09-08 01:01:04 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:07.483635 | orchestrator | 2025-09-08 01:01:07 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:07.486464 | orchestrator | 2025-09-08 01:01:07 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:07.488031 | orchestrator | 2025-09-08 01:01:07 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:07.488846 | orchestrator | 2025-09-08 01:01:07 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:07.488878 | orchestrator | 2025-09-08 01:01:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:10.539971 | orchestrator | 2025-09-08 01:01:10 | INFO  | 
Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:10.541233 | orchestrator | 2025-09-08 01:01:10 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:10.542615 | orchestrator | 2025-09-08 01:01:10 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:10.544295 | orchestrator | 2025-09-08 01:01:10 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:10.544336 | orchestrator | 2025-09-08 01:01:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:13.584439 | orchestrator | 2025-09-08 01:01:13 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:13.585460 | orchestrator | 2025-09-08 01:01:13 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:13.587741 | orchestrator | 2025-09-08 01:01:13 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:13.588085 | orchestrator | 2025-09-08 01:01:13 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:13.588107 | orchestrator | 2025-09-08 01:01:13 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:16.630768 | orchestrator | 2025-09-08 01:01:16 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:16.631883 | orchestrator | 2025-09-08 01:01:16 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:16.632679 | orchestrator | 2025-09-08 01:01:16 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:16.634422 | orchestrator | 2025-09-08 01:01:16 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:16.634517 | orchestrator | 2025-09-08 01:01:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:19.680656 | orchestrator | 2025-09-08 01:01:19 | INFO  | Task 
f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:19.681800 | orchestrator | 2025-09-08 01:01:19 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:19.684147 | orchestrator | 2025-09-08 01:01:19 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:19.685872 | orchestrator | 2025-09-08 01:01:19 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:19.686331 | orchestrator | 2025-09-08 01:01:19 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:22.729996 | orchestrator | 2025-09-08 01:01:22 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:22.731527 | orchestrator | 2025-09-08 01:01:22 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:22.731762 | orchestrator | 2025-09-08 01:01:22 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:22.733161 | orchestrator | 2025-09-08 01:01:22 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:22.733181 | orchestrator | 2025-09-08 01:01:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:25.779730 | orchestrator | 2025-09-08 01:01:25 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:25.781961 | orchestrator | 2025-09-08 01:01:25 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:25.783404 | orchestrator | 2025-09-08 01:01:25 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:25.785103 | orchestrator | 2025-09-08 01:01:25 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:25.785502 | orchestrator | 2025-09-08 01:01:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:28.807256 | orchestrator | 2025-09-08 01:01:28 | INFO  | Task 
f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:28.807894 | orchestrator | 2025-09-08 01:01:28 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:28.808544 | orchestrator | 2025-09-08 01:01:28 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:28.809265 | orchestrator | 2025-09-08 01:01:28 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:28.809443 | orchestrator | 2025-09-08 01:01:28 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:31.839777 | orchestrator | 2025-09-08 01:01:31 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:31.840581 | orchestrator | 2025-09-08 01:01:31 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:31.841042 | orchestrator | 2025-09-08 01:01:31 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:31.842361 | orchestrator | 2025-09-08 01:01:31 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:31.842409 | orchestrator | 2025-09-08 01:01:31 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:34.875524 | orchestrator | 2025-09-08 01:01:34 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:34.877070 | orchestrator | 2025-09-08 01:01:34 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:34.878323 | orchestrator | 2025-09-08 01:01:34 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:34.878815 | orchestrator | 2025-09-08 01:01:34 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:34.878849 | orchestrator | 2025-09-08 01:01:34 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:37.907555 | orchestrator | 2025-09-08 01:01:37 | INFO  | Task 
f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:37.908543 | orchestrator | 2025-09-08 01:01:37 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:37.909532 | orchestrator | 2025-09-08 01:01:37 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:37.910823 | orchestrator | 2025-09-08 01:01:37 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:37.910847 | orchestrator | 2025-09-08 01:01:37 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:40.934636 | orchestrator | 2025-09-08 01:01:40 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:40.940233 | orchestrator | 2025-09-08 01:01:40 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:40.940268 | orchestrator | 2025-09-08 01:01:40 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:40.940281 | orchestrator | 2025-09-08 01:01:40 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:40.940325 | orchestrator | 2025-09-08 01:01:40 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:43.971243 | orchestrator | 2025-09-08 01:01:43 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:43.973422 | orchestrator | 2025-09-08 01:01:43 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:43.976165 | orchestrator | 2025-09-08 01:01:43 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:43.978615 | orchestrator | 2025-09-08 01:01:43 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:43.978864 | orchestrator | 2025-09-08 01:01:43 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:47.015135 | orchestrator | 2025-09-08 01:01:47 | INFO  | Task 
f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:47.015663 | orchestrator | 2025-09-08 01:01:47 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:47.016689 | orchestrator | 2025-09-08 01:01:47 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:47.017712 | orchestrator | 2025-09-08 01:01:47 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:47.017733 | orchestrator | 2025-09-08 01:01:47 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:50.054359 | orchestrator | 2025-09-08 01:01:50 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:50.055918 | orchestrator | 2025-09-08 01:01:50 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:50.060812 | orchestrator | 2025-09-08 01:01:50 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:50.063230 | orchestrator | 2025-09-08 01:01:50 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:50.063782 | orchestrator | 2025-09-08 01:01:50 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:53.118205 | orchestrator | 2025-09-08 01:01:53 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:01:53.120668 | orchestrator | 2025-09-08 01:01:53 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:01:53.123510 | orchestrator | 2025-09-08 01:01:53 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED 2025-09-08 01:01:53.124915 | orchestrator | 2025-09-08 01:01:53 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:53.124937 | orchestrator | 2025-09-08 01:01:53 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:01:56.182633 | orchestrator | 2025-09-08 01:01:56 | INFO  | Task 
f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:01:56.184068 | orchestrator | 2025-09-08 01:01:56 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:01:56.186540 | orchestrator | 2025-09-08 01:01:56 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state STARTED
2025-09-08 01:01:56.189005 | orchestrator | 2025-09-08 01:01:56 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED
2025-09-08 01:01:56.189046 | orchestrator | 2025-09-08 01:01:56 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:01:59.226731 | orchestrator | 2025-09-08 01:01:59 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:01:59.227219 | orchestrator | 2025-09-08 01:01:59 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED
2025-09-08 01:01:59.229974 | orchestrator | 2025-09-08 01:01:59 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED
2025-09-08 01:01:59.233619 | orchestrator | 2025-09-08 01:01:59 | INFO  | Task 40816d79-92dc-44d8-887c-173a1353d0c8 is in state SUCCESS
2025-09-08 01:01:59.235920 | orchestrator |
2025-09-08 01:01:59.235955 | orchestrator |
2025-09-08 01:01:59.235968 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:01:59.235982 | orchestrator |
2025-09-08 01:01:59.235994 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:01:59.236007 | orchestrator | Monday 08 September 2025 00:58:55 +0000 (0:00:00.327) 0:00:00.327 ******
2025-09-08 01:01:59.236059 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:01:59.236072 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:01:59.236083 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:01:59.236094 | orchestrator |
2025-09-08 01:01:59.236105 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:01:59.236116 | orchestrator | Monday 08 September 2025 00:58:56 +0000 (0:00:00.411) 0:00:00.739 ******
2025-09-08 01:01:59.236127 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-08 01:01:59.236139 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-08 01:01:59.236150 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-08 01:01:59.236189 | orchestrator |
2025-09-08 01:01:59.236202 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-08 01:01:59.236213 | orchestrator |
2025-09-08 01:01:59.236224 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-08 01:01:59.236235 | orchestrator | Monday 08 September 2025 00:58:56 +0000 (0:00:00.435) 0:00:01.174 ******
2025-09-08 01:01:59.236246 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:01:59.236258 | orchestrator |
2025-09-08 01:01:59.236269 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-08 01:01:59.236280 | orchestrator | Monday 08 September 2025 00:58:57 +0000 (0:00:00.624) 0:00:01.799 ******
2025-09-08 01:01:59.236291 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-09-08 01:01:59.236302 | orchestrator |
2025-09-08 01:01:59.236312 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-08 01:01:59.236323 | orchestrator | Monday 08 September 2025 00:59:00 +0000 (0:00:03.410) 0:00:05.209 ******
2025-09-08 01:01:59.236334 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-08 01:01:59.236363 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-08 01:01:59.236375 | orchestrator |
2025-09-08 01:01:59.236423 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-08 01:01:59.236434 | orchestrator | Monday 08 September 2025 00:59:07 +0000 (0:00:06.412) 0:00:11.622 ******
2025-09-08 01:01:59.236445 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-09-08 01:01:59.236455 | orchestrator |
2025-09-08 01:01:59.236466 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-08 01:01:59.236477 | orchestrator | Monday 08 September 2025 00:59:10 +0000 (0:00:03.559) 0:00:15.181 ******
2025-09-08 01:01:59.236488 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:01:59.236500 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-09-08 01:01:59.236511 | orchestrator |
2025-09-08 01:01:59.236522 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-09-08 01:01:59.236532 | orchestrator | Monday 08 September 2025 00:59:14 +0000 (0:00:03.476) 0:00:18.658 ******
2025-09-08 01:01:59.236543 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:01:59.236554 | orchestrator |
2025-09-08 01:01:59.236565 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-09-08 01:01:59.236576 | orchestrator | Monday 08 September 2025 00:59:17 +0000 (0:00:03.640) 0:00:22.298 ******
2025-09-08 01:01:59.236602 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-09-08 01:01:59.236613 | orchestrator |
2025-09-08 01:01:59.236623 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-09-08 01:01:59.236634 | orchestrator | Monday 08 September 2025 00:59:21 +0000 (0:00:04.044) 0:00:26.343 ******
2025-09-08 01:01:59.236667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-08 01:01:59.236692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-08 01:01:59.236706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-08 01:01:59.236727 | orchestrator |
2025-09-08 01:01:59.236738 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-08 01:01:59.236749 | orchestrator | Monday 08 September 2025 00:59:30 +0000 (0:00:08.177) 0:00:34.520 ******
2025-09-08 01:01:59.236767 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:01:59.236779 | orchestrator |
2025-09-08 01:01:59.236790 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-09-08 01:01:59.236801 | orchestrator | Monday 08 September 2025 00:59:30 +0000 (0:00:00.686) 0:00:35.207 ******
2025-09-08 01:01:59.236812 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:01:59.236823 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:01:59.236833 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:01:59.236844 | orchestrator |
2025-09-08 01:01:59.236855 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-09-08 01:01:59.236866 | orchestrator | Monday 08 September 2025 00:59:34 +0000 (0:00:03.572) 0:00:38.779 ******
2025-09-08 01:01:59.236877 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:01:59.236888 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:01:59.236899 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:01:59.236910 | orchestrator |
2025-09-08 01:01:59.236920 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-09-08 01:01:59.236931 | orchestrator | Monday 08 September 2025 00:59:36 +0000 (0:00:01.885) 0:00:40.665 ******
2025-09-08 01:01:59.236942 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:01:59.236953 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:01:59.236964 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-08 01:01:59.236974 | orchestrator |
2025-09-08 01:01:59.236985 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-09-08 01:01:59.236996 | orchestrator | Monday 08 September 2025 00:59:37 +0000 (0:00:01.109) 0:00:41.775 ******
2025-09-08 01:01:59.237007 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:01:59.237026 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:01:59.237037 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:01:59.237048 | orchestrator |
2025-09-08 01:01:59.237064 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-08 01:01:59.237075 | orchestrator | Monday 08 September 2025 00:59:38 +0000 (0:00:00.799) 0:00:42.574 ******
2025-09-08 01:01:59.237086 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:01:59.237097 | orchestrator |
2025-09-08 01:01:59.237107 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-08 01:01:59.237118 | orchestrator | Monday 08 September 2025 00:59:38 +0000 (0:00:00.109) 0:00:42.683 ******
2025-09-08 01:01:59.237129 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:01:59.237140 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:01:59.237151 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:01:59.237162 | orchestrator |
2025-09-08 01:01:59.237173 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-08 01:01:59.237184 | orchestrator | Monday 08 September 2025 00:59:38 +0000 (0:00:00.345) 0:00:43.028 ******
2025-09-08 01:01:59.237195 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:01:59.237205 | orchestrator |
2025-09-08 01:01:59.237216 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-09-08 01:01:59.237227 | orchestrator | Monday 08 September 2025 00:59:39 +0000 (0:00:00.566) 0:00:43.595 ******
2025-09-08 01:01:59.237245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-08 01:01:59.237264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '',
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:01:59.237285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:01:59.237297 | orchestrator | 2025-09-08 01:01:59.237308 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-08 01:01:59.237319 | orchestrator | Monday 08 September 2025 00:59:43 +0000 (0:00:04.684) 0:00:48.279 ****** 2025-09-08 01:01:59.237340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:01:59.237360 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:01:59.237393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:01:59.237406 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.237427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:01:59.237449 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.237460 | orchestrator | 2025-09-08 01:01:59.237471 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-08 01:01:59.237482 | orchestrator | Monday 08 September 2025 00:59:48 +0000 (0:00:04.810) 0:00:53.089 ****** 2025-09-08 01:01:59.237499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:01:59.237511 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.237529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:01:59.237541 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:01:59.237572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-08 01:01:59.237585 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.237596 | orchestrator | 2025-09-08 01:01:59.237607 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-08 01:01:59.237618 | orchestrator | Monday 08 September 2025 00:59:53 +0000 (0:00:04.754) 0:00:57.844 ****** 2025-09-08 01:01:59.237629 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:01:59.237639 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.237650 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.237661 | orchestrator | 2025-09-08 01:01:59.237672 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-08 01:01:59.237683 | orchestrator | Monday 08 September 2025 00:59:57 +0000 (0:00:03.734) 0:01:01.579 ****** 2025-09-08 01:01:59.237702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:01:59.237726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:01:59.237740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:01:59.237752 | orchestrator | 2025-09-08 01:01:59.237763 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-08 01:01:59.237774 | orchestrator | Monday 08 September 2025 01:00:02 +0000 (0:00:05.236) 0:01:06.816 ****** 2025-09-08 01:01:59.237785 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:01:59.237796 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:01:59.237814 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:01:59.237825 | orchestrator | 2025-09-08 01:01:59.237836 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-08 01:01:59.238114 | orchestrator | Monday 08 September 2025 01:00:10 +0000 (0:00:07.734) 0:01:14.550 ****** 2025-09-08 01:01:59.238136 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.238148 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.238160 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:01:59.238171 | orchestrator | 2025-09-08 01:01:59.238182 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-08 01:01:59.238194 | orchestrator | Monday 08 September 2025 01:00:16 +0000 (0:00:06.257) 0:01:20.808 ****** 2025-09-08 01:01:59.238205 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.238217 | orchestrator | skipping: 
[testbed-node-0] 2025-09-08 01:01:59.238228 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.238239 | orchestrator | 2025-09-08 01:01:59.238250 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-08 01:01:59.238262 | orchestrator | Monday 08 September 2025 01:00:21 +0000 (0:00:05.143) 0:01:25.951 ****** 2025-09-08 01:01:59.238273 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.238284 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:01:59.238296 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.238307 | orchestrator | 2025-09-08 01:01:59.238318 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-08 01:01:59.238330 | orchestrator | Monday 08 September 2025 01:00:26 +0000 (0:00:04.767) 0:01:30.719 ****** 2025-09-08 01:01:59.238341 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.238352 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.238363 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:01:59.238375 | orchestrator | 2025-09-08 01:01:59.238436 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-08 01:01:59.238448 | orchestrator | Monday 08 September 2025 01:00:29 +0000 (0:00:02.903) 0:01:33.622 ****** 2025-09-08 01:01:59.238459 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:01:59.238470 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.238481 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.238492 | orchestrator | 2025-09-08 01:01:59.238503 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-08 01:01:59.238514 | orchestrator | Monday 08 September 2025 01:00:29 +0000 (0:00:00.542) 0:01:34.165 ****** 2025-09-08 01:01:59.238551 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-08 01:01:59.238563 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:01:59.238582 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-08 01:01:59.238594 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.238605 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-08 01:01:59.238616 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.238626 | orchestrator | 2025-09-08 01:01:59.238637 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-08 01:01:59.238648 | orchestrator | Monday 08 September 2025 01:00:33 +0000 (0:00:03.726) 0:01:37.891 ****** 2025-09-08 01:01:59.238664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:01:59.238701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:01:59.238722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-08 01:01:59.238744 | orchestrator | 2025-09-08 01:01:59.238757 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-08 01:01:59.238770 | orchestrator | Monday 08 September 2025 01:00:36 +0000 (0:00:03.164) 0:01:41.055 ****** 2025-09-08 01:01:59.238782 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:01:59.238795 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:01:59.238808 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:01:59.238820 | orchestrator | 2025-09-08 01:01:59.238833 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-08 01:01:59.238847 | orchestrator | Monday 08 September 2025 01:00:36 +0000 (0:00:00.251) 0:01:41.307 ****** 2025-09-08 01:01:59.238860 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:01:59.238873 | orchestrator | 2025-09-08 01:01:59.238886 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-08 01:01:59.238899 | orchestrator | Monday 08 September 2025 01:00:38 +0000 (0:00:02.038) 0:01:43.345 ****** 2025-09-08 01:01:59.238912 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:01:59.238925 | orchestrator | 2025-09-08 01:01:59.238938 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-08 01:01:59.238951 | orchestrator | Monday 08 September 2025 01:00:41 +0000 (0:00:02.205) 0:01:45.550 ****** 2025-09-08 01:01:59.238964 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:01:59.238976 | orchestrator | 2025-09-08 01:01:59.238990 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-08 01:01:59.239008 | orchestrator | Monday 08 September 2025 01:00:43 +0000 (0:00:02.117) 0:01:47.668 
****** 2025-09-08 01:01:59.239019 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:01:59.239030 | orchestrator | 2025-09-08 01:01:59.239041 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-08 01:01:59.239052 | orchestrator | Monday 08 September 2025 01:01:10 +0000 (0:00:27.700) 0:02:15.369 ****** 2025-09-08 01:01:59.239063 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:01:59.239074 | orchestrator | 2025-09-08 01:01:59.239085 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-08 01:01:59.239095 | orchestrator | Monday 08 September 2025 01:01:12 +0000 (0:00:02.061) 0:02:17.431 ****** 2025-09-08 01:01:59.239106 | orchestrator | 2025-09-08 01:01:59.239117 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-08 01:01:59.239128 | orchestrator | Monday 08 September 2025 01:01:13 +0000 (0:00:00.275) 0:02:17.706 ****** 2025-09-08 01:01:59.239138 | orchestrator | 2025-09-08 01:01:59.239149 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-08 01:01:59.239160 | orchestrator | Monday 08 September 2025 01:01:13 +0000 (0:00:00.070) 0:02:17.776 ****** 2025-09-08 01:01:59.239171 | orchestrator | 2025-09-08 01:01:59.239182 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-08 01:01:59.239193 | orchestrator | Monday 08 September 2025 01:01:13 +0000 (0:00:00.136) 0:02:17.913 ****** 2025-09-08 01:01:59.239203 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:01:59.239214 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:01:59.239225 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:01:59.239236 | orchestrator | 2025-09-08 01:01:59.239247 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:01:59.239259 | orchestrator | 
testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-08 01:01:59.239273 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-08 01:01:59.239291 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-08 01:01:59.239302 | orchestrator | 2025-09-08 01:01:59.239313 | orchestrator | 2025-09-08 01:01:59.239329 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:01:59.239340 | orchestrator | Monday 08 September 2025 01:01:57 +0000 (0:00:44.140) 0:03:02.053 ****** 2025-09-08 01:01:59.239351 | orchestrator | =============================================================================== 2025-09-08 01:01:59.239362 | orchestrator | glance : Restart glance-api container ---------------------------------- 44.14s 2025-09-08 01:01:59.239372 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.70s 2025-09-08 01:01:59.239403 | orchestrator | glance : Ensuring config directories exist ------------------------------ 8.18s 2025-09-08 01:01:59.239414 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.73s 2025-09-08 01:01:59.239425 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.41s 2025-09-08 01:01:59.239435 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.26s 2025-09-08 01:01:59.239446 | orchestrator | glance : Copying over config.json files for services -------------------- 5.24s 2025-09-08 01:01:59.239457 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.14s 2025-09-08 01:01:59.239468 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.81s 2025-09-08 01:01:59.239479 | orchestrator | glance : Copying over 
glance-image-import.conf -------------------------- 4.77s 2025-09-08 01:01:59.239490 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.75s 2025-09-08 01:01:59.239501 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.68s 2025-09-08 01:01:59.239511 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.04s 2025-09-08 01:01:59.239522 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.73s 2025-09-08 01:01:59.239533 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.73s 2025-09-08 01:01:59.239544 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.64s 2025-09-08 01:01:59.239555 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.57s 2025-09-08 01:01:59.239566 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.56s 2025-09-08 01:01:59.239577 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.48s 2025-09-08 01:01:59.239588 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.41s 2025-09-08 01:01:59.239599 | orchestrator | 2025-09-08 01:01:59 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:01:59.239610 | orchestrator | 2025-09-08 01:01:59 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:02.300221 | orchestrator | 2025-09-08 01:02:02 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:02.302118 | orchestrator | 2025-09-08 01:02:02 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:02.303681 | orchestrator | 2025-09-08 01:02:02 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:02:02.305580 | orchestrator | 
2025-09-08 01:02:02 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:02.305608 | orchestrator | 2025-09-08 01:02:02 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:05.362945 | orchestrator | 2025-09-08 01:02:05 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:05.363073 | orchestrator | 2025-09-08 01:02:05 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:05.363121 | orchestrator | 2025-09-08 01:02:05 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:02:05.363134 | orchestrator | 2025-09-08 01:02:05 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:05.363145 | orchestrator | 2025-09-08 01:02:05 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:08.399484 | orchestrator | 2025-09-08 01:02:08 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:08.400533 | orchestrator | 2025-09-08 01:02:08 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:08.406630 | orchestrator | 2025-09-08 01:02:08 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:02:08.411607 | orchestrator | 2025-09-08 01:02:08 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:08.411639 | orchestrator | 2025-09-08 01:02:08 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:11.461979 | orchestrator | 2025-09-08 01:02:11 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:11.465436 | orchestrator | 2025-09-08 01:02:11 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:11.467460 | orchestrator | 2025-09-08 01:02:11 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:02:11.468511 | orchestrator | 2025-09-08 01:02:11 | INFO  | 
Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:11.468993 | orchestrator | 2025-09-08 01:02:11 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:14.505911 | orchestrator | 2025-09-08 01:02:14 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:14.506643 | orchestrator | 2025-09-08 01:02:14 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:14.507441 | orchestrator | 2025-09-08 01:02:14 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:02:14.508714 | orchestrator | 2025-09-08 01:02:14 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:14.508898 | orchestrator | 2025-09-08 01:02:14 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:17.550827 | orchestrator | 2025-09-08 01:02:17 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:17.551969 | orchestrator | 2025-09-08 01:02:17 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:17.554416 | orchestrator | 2025-09-08 01:02:17 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:02:17.556261 | orchestrator | 2025-09-08 01:02:17 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:17.556697 | orchestrator | 2025-09-08 01:02:17 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:20.600940 | orchestrator | 2025-09-08 01:02:20 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:20.603252 | orchestrator | 2025-09-08 01:02:20 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:20.605599 | orchestrator | 2025-09-08 01:02:20 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state STARTED 2025-09-08 01:02:20.608088 | orchestrator | 2025-09-08 01:02:20 | INFO  | Task 
39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:20.608121 | orchestrator | 2025-09-08 01:02:20 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:23.650104 | orchestrator | 2025-09-08 01:02:23 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:23.651020 | orchestrator | 2025-09-08 01:02:23 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:23.652311 | orchestrator | 2025-09-08 01:02:23 | INFO  | Task 9bd6519c-ad65-464c-bbd7-93c9e5431f73 is in state STARTED 2025-09-08 01:02:23.655086 | orchestrator | 2025-09-08 01:02:23 | INFO  | Task 86c86c36-8e5d-4ff8-9389-c0e12c1e9023 is in state SUCCESS 2025-09-08 01:02:23.655593 | orchestrator | 2025-09-08 01:02:23.657523 | orchestrator | 2025-09-08 01:02:23.657555 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:02:23.657568 | orchestrator | 2025-09-08 01:02:23.657579 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:02:23.657591 | orchestrator | Monday 08 September 2025 00:58:47 +0000 (0:00:00.296) 0:00:00.296 ****** 2025-09-08 01:02:23.657602 | orchestrator | ok: [testbed-manager] 2025-09-08 01:02:23.657655 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:02:23.657668 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:02:23.657679 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:02:23.657690 | orchestrator | ok: [testbed-node-3] 2025-09-08 01:02:23.657701 | orchestrator | ok: [testbed-node-4] 2025-09-08 01:02:23.657712 | orchestrator | ok: [testbed-node-5] 2025-09-08 01:02:23.657723 | orchestrator | 2025-09-08 01:02:23.657734 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:02:23.657745 | orchestrator | Monday 08 September 2025 00:58:48 +0000 (0:00:00.947) 0:00:01.244 ****** 2025-09-08 01:02:23.657757 | orchestrator | 
ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-08 01:02:23.657769 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-08 01:02:23.657780 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-08 01:02:23.657791 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-08 01:02:23.657802 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-08 01:02:23.657835 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-08 01:02:23.657848 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-08 01:02:23.657859 | orchestrator | 2025-09-08 01:02:23.657965 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-08 01:02:23.657978 | orchestrator | 2025-09-08 01:02:23.657989 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-08 01:02:23.658000 | orchestrator | Monday 08 September 2025 00:58:49 +0000 (0:00:00.768) 0:00:02.012 ****** 2025-09-08 01:02:23.658012 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:02:23.658108 | orchestrator | 2025-09-08 01:02:23.658122 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-08 01:02:23.658194 | orchestrator | Monday 08 September 2025 00:58:50 +0000 (0:00:01.744) 0:00:03.756 ****** 2025-09-08 01:02:23.658213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.658233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.658270 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 01:02:23.658287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658316 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.658330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.658363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.658377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.658475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.658508 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658689 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 01:02:23.658705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658717 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658871 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.658916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.658928 | orchestrator | 2025-09-08 01:02:23.658939 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-08 01:02:23.658950 | orchestrator | Monday 08 September 2025 00:58:55 +0000 (0:00:04.058) 0:00:07.815 ****** 2025-09-08 01:02:23.658975 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:02:23.658986 | orchestrator | 2025-09-08 01:02:23.658997 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-08 01:02:23.659008 | orchestrator | Monday 08 September 2025 00:58:56 +0000 (0:00:01.621) 0:00:09.437 ****** 2025-09-08 01:02:23.659020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.659032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.659043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.659062 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 01:02:23.659074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.659085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.659115 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-09-08 01:02:23.659135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.659146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659184 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.659204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.659237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}}) 2025-09-08 01:02:23.659269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.659316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659352 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.659370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.659382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.659422 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.659439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.659451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.659463 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 01:02:23.659483 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659506 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.659541 | orchestrator | 2025-09-08 01:02:23.659553 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-08 01:02:23.659565 | orchestrator | Monday 08 September 2025 00:59:02 +0000 (0:00:06.311) 0:00:15.749 ****** 2025-09-08 01:02:23.659577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-08 01:02:23.659588 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.659601 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.659621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.659633 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659669 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-08 01:02:23.659682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.659694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659705 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659717 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.659735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.659754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.659793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.659816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.659833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.659845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.659862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.659901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.659913 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:02:23.659924 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.659935 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.659946 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.659957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.659969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.659996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660008 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.660019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.660030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660058 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.660069 | orchestrator | 2025-09-08 01:02:23.660080 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-08 01:02:23.660092 | orchestrator | Monday 08 September 2025 00:59:04 +0000 (0:00:01.747) 0:00:17.497 ****** 2025-09-08 01:02:23.660103 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-08 01:02:23.660114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.660132 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.660150 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}})  2025-09-08 01:02:23.660162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660194 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-08 01:02:23.660206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660218 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.660264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660316 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:02:23.660327 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.660338 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.660349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.660367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660384 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-08 01:02:23.660465 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.660482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.660494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660524 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.660535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.660553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660576 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.660588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-08 01:02:23.660604 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-08 01:02:23.660627 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.660638 | orchestrator | 2025-09-08 01:02:23.660650 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-08 01:02:23.660661 | orchestrator | Monday 08 September 2025 00:59:06 +0000 (0:00:01.928) 0:00:19.425 ****** 2025-09-08 01:02:23.660678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-09-08 01:02:23.660691 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 01:02:23.660708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.660721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.660732 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.660749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.660761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.660780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.660792 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.660803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.660821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.660832 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.660844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.660860 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.660872 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.660893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.660904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.660916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.660934 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.660946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.660963 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 01:02:23.660983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.660994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.661006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.661493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.661513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.661525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.661543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:23.661567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-08 01:02:23.661579 | orchestrator |
2025-09-08 01:02:23.661590 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-08 01:02:23.661601 | orchestrator | Monday 08 September 2025 00:59:12 +0000 (0:00:05.851) 0:00:25.276 ******
2025-09-08 01:02:23.661612 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-08 01:02:23.661624 | orchestrator |
2025-09-08 01:02:23.661634 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-08 01:02:23.661646 | orchestrator | Monday 08 September 2025 00:59:13 +0000 (0:00:00.947) 0:00:26.224 ******
2025-09-08 01:02:23.661657 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096771, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth':
False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.661669 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096771, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662002 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096771, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662049 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096771, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662063 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096794, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0078428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662091 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096794, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0078428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662103 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096771, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662114 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096771, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.662126 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096794, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0078428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662227 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096771, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662245 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096794, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1757290575.0078428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662257 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096765, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0014105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662282 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096794, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0078428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662294 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096765, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0014105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662305 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096794, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0078428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662316 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096765, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0014105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662359 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096786, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.00588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662372 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096765, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0014105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662384 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096786, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.00588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662748 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096765, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0014105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662794 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096757, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9985752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662806 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096765, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0014105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662818 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096786, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.00588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.662981 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096794, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1757290575.0078428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.662998 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096757, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9985752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663020 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096786, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.00588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663039 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096757, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9985752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663051 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096757, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9985752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663063 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096773, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663074 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096773, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663124 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096786, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.00588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663138 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096757, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9985752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663157 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096786, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.00588, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663174 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096784, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0049036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663186 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096773, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663198 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096773, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663209 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096773, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.003459, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663253 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096784, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0049036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663266 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096784, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0049036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663285 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096776, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0040145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663301 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096776, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0040145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663313 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096757, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9985752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663325 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096784, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0049036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663336 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096784, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0049036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663378 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096776, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0040145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663435 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096769, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0014105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.663456 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096765, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0014105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.663474 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/*.rules, all rule-file items skipped) 2025-09-08 01:02:23.663474 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/*.rules, all rule-file items skipped) 2025-09-08 01:02:23.663474 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/*.rules, all rule-file items skipped) 2025-09-08 01:02:23.663474 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/*.rules, all rule-file items skipped) 2025-09-08 01:02:23.663474 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/*.rules, all rule-file items skipped) 2025-09-08 01:02:23.663474 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/*.rules, all rule-file items skipped) 2025-09-08 01:02:23.663775 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules) 2025-09-08 01:02:23.664070 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules) 2025-09-08 01:02:23.664274 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.664307 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules) 2025-09-08 01:02:23.664461 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.664528 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.664581 | orchestrator | 
skipping: [testbed-node-5] 2025-09-08 01:02:23.664633 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096811, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0139313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.664653 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.664664 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096778, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0043025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.664676 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096784, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0049036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664688 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096811, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0139313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-08 01:02:23.664699 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.664718 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096776, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0040145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664730 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096769, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0014105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664747 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096793, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0070357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664759 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096752, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.998116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664777 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096813, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0139313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664790 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
7408, 'inode': 1096791, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0070357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664801 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096761, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.999195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664821 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096755, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9982991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664833 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096781, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0049036, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664849 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096778, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0043025, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664867 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096811, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290575.0139313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-08 01:02:23.664879 | orchestrator | 2025-09-08 01:02:23.664892 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-08 01:02:23.664904 | orchestrator | Monday 08 September 2025 00:59:44 +0000 (0:00:31.339) 0:00:57.563 ****** 2025-09-08 01:02:23.664916 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 01:02:23.664927 | orchestrator | 2025-09-08 01:02:23.664938 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-08 01:02:23.664949 | orchestrator | Monday 08 September 2025 00:59:45 
+0000 (0:00:00.713) 0:00:58.277 ****** 2025-09-08 01:02:23.664960 | orchestrator | [WARNING]: Skipped 2025-09-08 01:02:23.664971 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.664982 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-08 01:02:23.664993 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665004 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-08 01:02:23.665015 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 01:02:23.665026 | orchestrator | [WARNING]: Skipped 2025-09-08 01:02:23.665037 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665048 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-08 01:02:23.665059 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665069 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-08 01:02:23.665080 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 01:02:23.665091 | orchestrator | [WARNING]: Skipped 2025-09-08 01:02:23.665102 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665112 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-08 01:02:23.665123 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665134 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-08 01:02:23.665145 | orchestrator | [WARNING]: Skipped 2025-09-08 01:02:23.665156 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665166 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-08 01:02:23.665177 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665188 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-08 01:02:23.665198 | orchestrator | [WARNING]: Skipped 2025-09-08 01:02:23.665215 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665226 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-08 01:02:23.665237 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665248 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-08 01:02:23.665259 | orchestrator | [WARNING]: Skipped 2025-09-08 01:02:23.665270 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665281 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-08 01:02:23.665291 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665302 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-08 01:02:23.665320 | orchestrator | [WARNING]: Skipped 2025-09-08 01:02:23.665331 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665341 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-08 01:02:23.665352 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-08 01:02:23.665363 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-08 01:02:23.665374 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-08 01:02:23.665385 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-08 01:02:23.665413 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-08 01:02:23.665424 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-08 01:02:23.665435 | orchestrator | ok: [testbed-node-5 
-> localhost] 2025-09-08 01:02:23.665446 | orchestrator | 2025-09-08 01:02:23.665457 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-08 01:02:23.665468 | orchestrator | Monday 08 September 2025 00:59:48 +0000 (0:00:03.346) 0:01:01.623 ****** 2025-09-08 01:02:23.665479 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-08 01:02:23.665490 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.665501 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-08 01:02:23.665512 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.665528 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-08 01:02:23.665539 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.665550 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-08 01:02:23.665561 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.665572 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-08 01:02:23.665583 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.665594 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-08 01:02:23.665605 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.665616 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-08 01:02:23.665627 | orchestrator | 2025-09-08 01:02:23.665638 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-08 01:02:23.665649 | orchestrator | Monday 08 September 2025 01:00:09 +0000 (0:00:20.897) 0:01:22.521 ****** 2025-09-08 01:02:23.665660 | 
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-08 01:02:23.665671 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.665682 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-08 01:02:23.665693 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.665704 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-08 01:02:23.665715 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.665726 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-08 01:02:23.665736 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.665747 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-08 01:02:23.665758 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.665769 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-08 01:02:23.665780 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.665791 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-08 01:02:23.665802 | orchestrator | 2025-09-08 01:02:23.665820 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-08 01:02:23.665831 | orchestrator | Monday 08 September 2025 01:00:14 +0000 (0:00:05.209) 0:01:27.730 ****** 2025-09-08 01:02:23.665843 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-08 01:02:23.665854 | orchestrator | skipping: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-08 01:02:23.665866 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-08 01:02:23.665877 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.665894 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-08 01:02:23.665905 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.665916 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.665927 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-08 01:02:23.665938 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.665949 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-08 01:02:23.665960 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.665971 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-08 01:02:23.665982 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.665993 | orchestrator | 2025-09-08 01:02:23.666004 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-08 01:02:23.666084 | orchestrator | Monday 08 September 2025 01:00:17 +0000 (0:00:02.568) 0:01:30.298 ****** 2025-09-08 01:02:23.666100 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 01:02:23.666111 | orchestrator | 2025-09-08 01:02:23.666122 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-08 01:02:23.666133 | orchestrator | Monday 08 September 
2025 01:00:18 +0000 (0:00:00.729) 0:01:31.028 ****** 2025-09-08 01:02:23.666144 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:02:23.666155 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.666165 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.666176 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.666187 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.666198 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.666208 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.666219 | orchestrator | 2025-09-08 01:02:23.666230 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-08 01:02:23.666241 | orchestrator | Monday 08 September 2025 01:00:18 +0000 (0:00:00.746) 0:01:31.774 ****** 2025-09-08 01:02:23.666251 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:02:23.666262 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.666273 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.666284 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.666300 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:23.666311 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:23.666322 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:02:23.666333 | orchestrator | 2025-09-08 01:02:23.666344 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-08 01:02:23.666355 | orchestrator | Monday 08 September 2025 01:00:22 +0000 (0:00:03.092) 0:01:34.866 ****** 2025-09-08 01:02:23.666366 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-08 01:02:23.666377 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:02:23.666435 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-08 01:02:23.666458 | orchestrator | 
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-08 01:02:23.666469 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-08 01:02:23.666479 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-08 01:02:23.666490 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.666501 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.666511 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.666522 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.666533 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-08 01:02:23.666543 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.666554 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-08 01:02:23.666565 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.666575 | orchestrator | 2025-09-08 01:02:23.666586 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-08 01:02:23.666597 | orchestrator | Monday 08 September 2025 01:00:24 +0000 (0:00:02.392) 0:01:37.259 ****** 2025-09-08 01:02:23.666607 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-08 01:02:23.666619 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-08 01:02:23.666630 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-08 01:02:23.666640 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.666651 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.666662 | orchestrator | skipping: 
[testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-08 01:02:23.666672 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.666683 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-08 01:02:23.666694 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.666705 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-08 01:02:23.666715 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.666726 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-08 01:02:23.666737 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.666748 | orchestrator | 2025-09-08 01:02:23.666766 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-08 01:02:23.666777 | orchestrator | Monday 08 September 2025 01:00:26 +0000 (0:00:01.775) 0:01:39.035 ****** 2025-09-08 01:02:23.666788 | orchestrator | [WARNING]: Skipped 2025-09-08 01:02:23.666799 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-08 01:02:23.666810 | orchestrator | due to this access issue: 2025-09-08 01:02:23.666821 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-08 01:02:23.666832 | orchestrator | not a directory 2025-09-08 01:02:23.666843 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-08 01:02:23.666853 | orchestrator | 2025-09-08 01:02:23.666864 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-08 01:02:23.666875 | orchestrator | Monday 08 September 2025 01:00:27 +0000 (0:00:01.171) 0:01:40.206 ****** 2025-09-08 01:02:23.666886 | orchestrator | skipping: 
[testbed-manager] 2025-09-08 01:02:23.666896 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.666907 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.666918 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.666928 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.666946 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.666957 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.666968 | orchestrator | 2025-09-08 01:02:23.666978 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-08 01:02:23.666989 | orchestrator | Monday 08 September 2025 01:00:28 +0000 (0:00:00.784) 0:01:40.991 ****** 2025-09-08 01:02:23.667000 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:02:23.667011 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:23.667021 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:23.667032 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:23.667042 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:23.667053 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:23.667063 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:23.667074 | orchestrator | 2025-09-08 01:02:23.667085 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-08 01:02:23.667096 | orchestrator | Monday 08 September 2025 01:00:29 +0000 (0:00:00.847) 0:01:41.839 ****** 2025-09-08 01:02:23.667114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.667127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.667139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.667150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.667162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.667182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.667207 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-08 01:02:23.667225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667272 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667291 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-08 01:02:23.667309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667373 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667384 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-08 01:02:23.667479 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-08 01:02:23.667492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667517 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-08 01:02:23.667553 | orchestrator | 2025-09-08 01:02:23.667564 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-08 01:02:23.667575 | orchestrator | Monday 08 September 2025 01:00:33 +0000 (0:00:04.098) 0:01:45.938 ****** 2025-09-08 01:02:23.667586 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-08 01:02:23.667597 | orchestrator | skipping: [testbed-manager] 2025-09-08 01:02:23.667608 | orchestrator | 2025-09-08 01:02:23.667619 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-08 01:02:23.667635 | orchestrator | Monday 08 September 2025 01:00:34 +0000 (0:00:01.042) 0:01:46.980 ****** 2025-09-08 01:02:23.667646 | orchestrator | 2025-09-08 01:02:23.667656 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-08 01:02:23.667667 | orchestrator | Monday 08 September 2025 01:00:34 +0000 (0:00:00.065) 0:01:47.045 ****** 2025-09-08 01:02:23.667678 | orchestrator | 2025-09-08 01:02:23.667689 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-08 01:02:23.667700 | 
orchestrator | Monday 08 September 2025 01:00:34 +0000 (0:00:00.062) 0:01:47.108 ****** 2025-09-08 01:02:23.667710 | orchestrator | 2025-09-08 01:02:23.667721 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-08 01:02:23.667732 | orchestrator | Monday 08 September 2025 01:00:34 +0000 (0:00:00.190) 0:01:47.299 ****** 2025-09-08 01:02:23.667743 | orchestrator | 2025-09-08 01:02:23.667753 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-08 01:02:23.667764 | orchestrator | Monday 08 September 2025 01:00:34 +0000 (0:00:00.062) 0:01:47.362 ****** 2025-09-08 01:02:23.667775 | orchestrator | 2025-09-08 01:02:23.667785 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-08 01:02:23.667796 | orchestrator | Monday 08 September 2025 01:00:34 +0000 (0:00:00.061) 0:01:47.423 ****** 2025-09-08 01:02:23.667807 | orchestrator | 2025-09-08 01:02:23.667818 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-08 01:02:23.667828 | orchestrator | Monday 08 September 2025 01:00:34 +0000 (0:00:00.063) 0:01:47.487 ****** 2025-09-08 01:02:23.667839 | orchestrator | 2025-09-08 01:02:23.667850 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-08 01:02:23.667861 | orchestrator | Monday 08 September 2025 01:00:34 +0000 (0:00:00.079) 0:01:47.566 ****** 2025-09-08 01:02:23.667879 | orchestrator | changed: [testbed-manager] 2025-09-08 01:02:23.667890 | orchestrator | 2025-09-08 01:02:23.667901 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-08 01:02:23.667911 | orchestrator | Monday 08 September 2025 01:00:57 +0000 (0:00:22.841) 0:02:10.408 ****** 2025-09-08 01:02:23.667922 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:02:23.667933 | orchestrator | changed: 
[testbed-node-5] 2025-09-08 01:02:23.667943 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:23.667954 | orchestrator | changed: [testbed-manager] 2025-09-08 01:02:23.667965 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:23.667976 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:02:23.667986 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:02:23.667997 | orchestrator | 2025-09-08 01:02:23.668008 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-08 01:02:23.668019 | orchestrator | Monday 08 September 2025 01:01:10 +0000 (0:00:13.321) 0:02:23.730 ****** 2025-09-08 01:02:23.668029 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:23.668040 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:23.668051 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:02:23.668061 | orchestrator | 2025-09-08 01:02:23.668072 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-08 01:02:23.668083 | orchestrator | Monday 08 September 2025 01:01:21 +0000 (0:00:10.426) 0:02:34.156 ****** 2025-09-08 01:02:23.668094 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:23.668104 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:02:23.668115 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:23.668126 | orchestrator | 2025-09-08 01:02:23.668137 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-08 01:02:23.668147 | orchestrator | Monday 08 September 2025 01:01:33 +0000 (0:00:11.882) 0:02:46.039 ****** 2025-09-08 01:02:23.668158 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:23.668169 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:02:23.668184 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:02:23.668196 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:02:23.668206 | orchestrator | changed: [testbed-node-2] 
2025-09-08 01:02:23.668217 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:23.668228 | orchestrator | changed: [testbed-manager] 2025-09-08 01:02:23.668238 | orchestrator | 2025-09-08 01:02:23.668249 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-08 01:02:23.668260 | orchestrator | Monday 08 September 2025 01:01:49 +0000 (0:00:16.544) 0:03:02.584 ****** 2025-09-08 01:02:23.668270 | orchestrator | changed: [testbed-manager] 2025-09-08 01:02:23.668281 | orchestrator | 2025-09-08 01:02:23.668292 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-08 01:02:23.668303 | orchestrator | Monday 08 September 2025 01:02:02 +0000 (0:00:13.088) 0:03:15.672 ****** 2025-09-08 01:02:23.668313 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:23.668324 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:23.668335 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:02:23.668346 | orchestrator | 2025-09-08 01:02:23.668357 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-08 01:02:23.668367 | orchestrator | Monday 08 September 2025 01:02:07 +0000 (0:00:04.757) 0:03:20.430 ****** 2025-09-08 01:02:23.668378 | orchestrator | changed: [testbed-manager] 2025-09-08 01:02:23.668404 | orchestrator | 2025-09-08 01:02:23.668415 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-08 01:02:23.668426 | orchestrator | Monday 08 September 2025 01:02:13 +0000 (0:00:05.542) 0:03:25.972 ****** 2025-09-08 01:02:23.668437 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:02:23.668448 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:02:23.668458 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:02:23.668469 | orchestrator | 2025-09-08 01:02:23.668480 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-08 01:02:23.668498 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-08 01:02:23.668510 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-08 01:02:23.668526 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-08 01:02:23.668537 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-08 01:02:23.668548 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-08 01:02:23.668559 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-08 01:02:23.668570 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-08 01:02:23.668580 | orchestrator | 2025-09-08 01:02:23.668591 | orchestrator | 2025-09-08 01:02:23.668602 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:02:23.668613 | orchestrator | Monday 08 September 2025 01:02:20 +0000 (0:00:06.946) 0:03:32.918 ****** 2025-09-08 01:02:23.668624 | orchestrator | =============================================================================== 2025-09-08 01:02:23.668635 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 31.34s 2025-09-08 01:02:23.668646 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.84s 2025-09-08 01:02:23.668656 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 20.90s 2025-09-08 01:02:23.668667 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.54s 2025-09-08 01:02:23.668678 | 
orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.32s 2025-09-08 01:02:23.668689 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.09s 2025-09-08 01:02:23.668700 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.88s 2025-09-08 01:02:23.668711 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.43s 2025-09-08 01:02:23.668721 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.95s 2025-09-08 01:02:23.668732 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.31s 2025-09-08 01:02:23.668743 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.85s 2025-09-08 01:02:23.668753 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.54s 2025-09-08 01:02:23.668764 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.21s 2025-09-08 01:02:23.668775 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.76s 2025-09-08 01:02:23.668786 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.10s 2025-09-08 01:02:23.668797 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.06s 2025-09-08 01:02:23.668807 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.35s 2025-09-08 01:02:23.668818 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.09s 2025-09-08 01:02:23.668829 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.57s 2025-09-08 01:02:23.668854 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.39s 2025-09-08 01:02:23.668865 | orchestrator | 
2025-09-08 01:02:23 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:23.668885 | orchestrator | 2025-09-08 01:02:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:26.704764 | orchestrator | 2025-09-08 01:02:26 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:26.706851 | orchestrator | 2025-09-08 01:02:26 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:26.708910 | orchestrator | 2025-09-08 01:02:26 | INFO  | Task 9bd6519c-ad65-464c-bbd7-93c9e5431f73 is in state STARTED 2025-09-08 01:02:26.710426 | orchestrator | 2025-09-08 01:02:26 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:26.710514 | orchestrator | 2025-09-08 01:02:26 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:29.753500 | orchestrator | 2025-09-08 01:02:29 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:29.754544 | orchestrator | 2025-09-08 01:02:29 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:29.756273 | orchestrator | 2025-09-08 01:02:29 | INFO  | Task 9bd6519c-ad65-464c-bbd7-93c9e5431f73 is in state STARTED 2025-09-08 01:02:29.758127 | orchestrator | 2025-09-08 01:02:29 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:29.758157 | orchestrator | 2025-09-08 01:02:29 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:32.813806 | orchestrator | 2025-09-08 01:02:32 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:32.816545 | orchestrator | 2025-09-08 01:02:32 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:32.819290 | orchestrator | 2025-09-08 01:02:32 | INFO  | Task 9bd6519c-ad65-464c-bbd7-93c9e5431f73 is in state STARTED 2025-09-08 01:02:32.821630 | orchestrator | 2025-09-08 01:02:32 | INFO  | 
Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:32.821653 | orchestrator | 2025-09-08 01:02:32 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:35.865892 | orchestrator | 2025-09-08 01:02:35 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:35.867863 | orchestrator | 2025-09-08 01:02:35 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:35.870143 | orchestrator | 2025-09-08 01:02:35 | INFO  | Task 9bd6519c-ad65-464c-bbd7-93c9e5431f73 is in state STARTED 2025-09-08 01:02:35.872157 | orchestrator | 2025-09-08 01:02:35 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state STARTED 2025-09-08 01:02:35.872177 | orchestrator | 2025-09-08 01:02:35 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:38.919849 | orchestrator | 2025-09-08 01:02:38 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:38.920507 | orchestrator | 2025-09-08 01:02:38 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:38.922483 | orchestrator | 2025-09-08 01:02:38 | INFO  | Task 9bd6519c-ad65-464c-bbd7-93c9e5431f73 is in state STARTED 2025-09-08 01:02:38.924653 | orchestrator | 2025-09-08 01:02:38 | INFO  | Task 39fd2615-91ce-41d7-b333-ffc754d63f47 is in state SUCCESS 2025-09-08 01:02:38.926672 | orchestrator | 2025-09-08 01:02:38.926704 | orchestrator | 2025-09-08 01:02:38.926716 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:02:38.926801 | orchestrator | 2025-09-08 01:02:38.926814 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:02:38.926824 | orchestrator | Monday 08 September 2025 00:59:04 +0000 (0:00:00.237) 0:00:00.237 ****** 2025-09-08 01:02:38.926835 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:02:38.926869 | orchestrator | ok: 
[testbed-node-1] 2025-09-08 01:02:38.926880 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:02:38.926889 | orchestrator | ok: [testbed-node-3] 2025-09-08 01:02:38.926962 | orchestrator | ok: [testbed-node-4] 2025-09-08 01:02:38.926977 | orchestrator | ok: [testbed-node-5] 2025-09-08 01:02:38.926987 | orchestrator | 2025-09-08 01:02:38.926997 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:02:38.927008 | orchestrator | Monday 08 September 2025 00:59:05 +0000 (0:00:00.603) 0:00:00.841 ****** 2025-09-08 01:02:38.927018 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-08 01:02:38.927337 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-08 01:02:38.927349 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-08 01:02:38.927359 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-08 01:02:38.927369 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-08 01:02:38.927378 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-08 01:02:38.927388 | orchestrator | 2025-09-08 01:02:38.927434 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-08 01:02:38.927444 | orchestrator | 2025-09-08 01:02:38.927454 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-08 01:02:38.927464 | orchestrator | Monday 08 September 2025 00:59:05 +0000 (0:00:00.707) 0:00:01.548 ****** 2025-09-08 01:02:38.927474 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:02:38.927485 | orchestrator | 2025-09-08 01:02:38.927806 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-08 01:02:38.927822 | orchestrator | Monday 08 
September 2025 00:59:06 +0000 (0:00:01.024) 0:00:02.572 ****** 2025-09-08 01:02:38.927833 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-08 01:02:38.927842 | orchestrator | 2025-09-08 01:02:38.927852 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-08 01:02:38.927861 | orchestrator | Monday 08 September 2025 00:59:09 +0000 (0:00:02.975) 0:00:05.547 ****** 2025-09-08 01:02:38.927871 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-08 01:02:38.927882 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-08 01:02:38.927891 | orchestrator | 2025-09-08 01:02:38.927901 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-08 01:02:38.927911 | orchestrator | Monday 08 September 2025 00:59:15 +0000 (0:00:05.889) 0:00:11.437 ****** 2025-09-08 01:02:38.927920 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-08 01:02:38.927930 | orchestrator | 2025-09-08 01:02:38.927940 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-08 01:02:38.927949 | orchestrator | Monday 08 September 2025 00:59:19 +0000 (0:00:03.313) 0:00:14.751 ****** 2025-09-08 01:02:38.927959 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-08 01:02:38.927968 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-08 01:02:38.927978 | orchestrator | 2025-09-08 01:02:38.927987 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-08 01:02:38.928010 | orchestrator | Monday 08 September 2025 00:59:22 +0000 (0:00:03.729) 0:00:18.480 ****** 2025-09-08 01:02:38.928021 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-08 
01:02:38.928031 | orchestrator | 2025-09-08 01:02:38.928040 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-08 01:02:38.928050 | orchestrator | Monday 08 September 2025 00:59:26 +0000 (0:00:03.795) 0:00:22.275 ****** 2025-09-08 01:02:38.928059 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-08 01:02:38.928069 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-08 01:02:38.928090 | orchestrator | 2025-09-08 01:02:38.928100 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-08 01:02:38.928109 | orchestrator | Monday 08 September 2025 00:59:34 +0000 (0:00:07.697) 0:00:29.972 ****** 2025-09-08 01:02:38.928158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.928173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.928185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.928196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.928211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.928229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.928270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.928282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.928293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.928309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.928326 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.928363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.928376 | orchestrator | 2025-09-08 01:02:38.928386 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-08 01:02:38.928416 | orchestrator | Monday 08 September 2025 00:59:36 +0000 (0:00:02.500) 0:00:32.473 ****** 2025-09-08 01:02:38.928427 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:38.928438 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:38.928450 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:38.928461 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:38.928473 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:38.928484 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:38.928495 | orchestrator | 2025-09-08 01:02:38.928506 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-08 01:02:38.928519 | orchestrator | Monday 08 September 2025 00:59:37 +0000 (0:00:00.515) 0:00:32.989 ****** 2025-09-08 01:02:38.928530 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:38.928541 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:38.928553 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:38.928565 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:02:38.928577 | orchestrator | 2025-09-08 01:02:38.928588 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs 
exists] ************* 2025-09-08 01:02:38.928600 | orchestrator | Monday 08 September 2025 00:59:38 +0000 (0:00:00.882) 0:00:33.871 ****** 2025-09-08 01:02:38.928612 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-08 01:02:38.928624 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-08 01:02:38.928636 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-08 01:02:38.928647 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-08 01:02:38.928658 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-08 01:02:38.928670 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-08 01:02:38.928682 | orchestrator | 2025-09-08 01:02:38.928693 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-08 01:02:38.928706 | orchestrator | Monday 08 September 2025 00:59:39 +0000 (0:00:01.662) 0:00:35.533 ****** 2025-09-08 01:02:38.928720 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 
01:02:38.928744 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:02:38.928785 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:02:38.928799 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:02:38.928810 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:02:38.928826 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-08 01:02:38.928842 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:02:38.928877 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': 
True}]) 2025-09-08 01:02:38.928890 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:02:38.928900 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:02:38.928922 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:02:38.928932 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-08 01:02:38.928942 | orchestrator | 2025-09-08 01:02:38.928952 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-08 01:02:38.928962 | orchestrator | Monday 08 September 2025 00:59:43 +0000 (0:00:03.661) 0:00:39.194 ****** 2025-09-08 01:02:38.928972 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-08 01:02:38.928983 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-08 
01:02:38.928993 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-08 01:02:38.929002 | orchestrator | 2025-09-08 01:02:38.929012 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-08 01:02:38.929022 | orchestrator | Monday 08 September 2025 00:59:45 +0000 (0:00:02.319) 0:00:41.514 ****** 2025-09-08 01:02:38.929056 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-08 01:02:38.929067 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-08 01:02:38.929077 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-08 01:02:38.929087 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-08 01:02:38.929097 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-08 01:02:38.929106 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-08 01:02:38.929116 | orchestrator | 2025-09-08 01:02:38.929126 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-08 01:02:38.929135 | orchestrator | Monday 08 September 2025 00:59:50 +0000 (0:00:04.262) 0:00:45.777 ****** 2025-09-08 01:02:38.929145 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-08 01:02:38.929155 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-08 01:02:38.929165 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-08 01:02:38.929174 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-08 01:02:38.929196 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-08 01:02:38.929206 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-08 01:02:38.929215 | orchestrator | 2025-09-08 01:02:38.929225 | orchestrator | TASK [cinder : 
Check if policies shall be overwritten] ************************* 2025-09-08 01:02:38.929234 | orchestrator | Monday 08 September 2025 00:59:51 +0000 (0:00:01.241) 0:00:47.018 ****** 2025-09-08 01:02:38.929244 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:38.929254 | orchestrator | 2025-09-08 01:02:38.929263 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-08 01:02:38.929273 | orchestrator | Monday 08 September 2025 00:59:51 +0000 (0:00:00.199) 0:00:47.217 ****** 2025-09-08 01:02:38.929282 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:38.929292 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:38.929302 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:38.929311 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:38.929320 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:38.929330 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:38.929340 | orchestrator | 2025-09-08 01:02:38.929349 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-08 01:02:38.929359 | orchestrator | Monday 08 September 2025 00:59:52 +0000 (0:00:01.245) 0:00:48.463 ****** 2025-09-08 01:02:38.929370 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:02:38.929381 | orchestrator | 2025-09-08 01:02:38.929391 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-08 01:02:38.929444 | orchestrator | Monday 08 September 2025 00:59:53 +0000 (0:00:01.043) 0:00:49.507 ****** 2025-09-08 01:02:38.929461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.929472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.929514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.929533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.929544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.929559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.929569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.929606 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929669 | orchestrator |
2025-09-08 01:02:38.929680 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-09-08 01:02:38.929690 | orchestrator | Monday 08 September 2025 00:59:56 +0000 (0:00:02.929) 0:00:52.436 ******
2025-09-08 01:02:38.929705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.929721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929731 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:38.929741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.929752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.929777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929793 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:38.929802 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:38.929819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929840 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:38.929850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929875 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:38.929885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929919 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:38.929929 | orchestrator |
2025-09-08 01:02:38.929939 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-09-08 01:02:38.929949 | orchestrator | Monday 08 September 2025 00:59:58 +0000 (0:00:01.468) 0:00:53.905 ******
2025-09-08 01:02:38.929959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.929969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.929979 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:02:38.929994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.930010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930072 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:02:38.930089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.930100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930110 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:02:38.930120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930145 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:38.930161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930187 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:38.930197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930218 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:38.930228 | orchestrator |
2025-09-08 01:02:38.930237 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-09-08 01:02:38.930247 | orchestrator | Monday 08 September 2025 01:00:00 +0000 (0:00:01.970) 0:00:55.875 ******
2025-09-08 01:02:38.930265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.930285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.930302 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.930323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-08 01:02:38.930527 | orchestrator |
2025-09-08 01:02:38.930537 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-09-08 01:02:38.930547 | orchestrator | Monday 08 September 2025 01:00:03 +0000 (0:00:03.352) 0:00:59.228 ******
2025-09-08 01:02:38.930557 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-08 01:02:38.930567 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:02:38.930576 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-08 01:02:38.930586 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:02:38.930596 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-08 01:02:38.930605 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-08 01:02:38.930615 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:02:38.930625 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-08 01:02:38.930641 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-09-08 01:02:38.930651 | orchestrator |
2025-09-08 01:02:38.930660 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-09-08 01:02:38.930670 | orchestrator | Monday 08 September 2025 01:00:05 +0000 (0:00:02.183) 0:01:01.412 ******
2025-09-08 01:02:38.930681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-08 01:02:38.930691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776',
'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.930712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.930723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.930739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.930750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.930760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.930777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.930792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.930802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.930818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.930828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.930838 | orchestrator | 2025-09-08 01:02:38.930848 | orchestrator | TASK 
[cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-08 01:02:38.930858 | orchestrator | Monday 08 September 2025 01:00:15 +0000 (0:00:09.819) 0:01:11.231 ****** 2025-09-08 01:02:38.930874 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:38.930884 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:38.930893 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:38.930903 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:02:38.930912 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:02:38.930921 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:02:38.930931 | orchestrator | 2025-09-08 01:02:38.930941 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-08 01:02:38.930950 | orchestrator | Monday 08 September 2025 01:00:18 +0000 (0:00:02.950) 0:01:14.182 ****** 2025-09-08 01:02:38.930966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:02:38.930977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:02:38.930992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:02:38.931003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:02:38.931013 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:38.931023 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:38.931033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-08 01:02:38.931049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:02:38.931060 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:38.931074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:02:38.931084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:02:38.931094 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:38.931110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:02:38.931128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:02:38.931139 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:38.931149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-08 01:02:38.931167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-08 01:02:38.931177 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:38.931187 | orchestrator | 2025-09-08 01:02:38.931196 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-08 01:02:38.931206 | orchestrator | Monday 08 September 2025 01:00:20 +0000 (0:00:01.562) 0:01:15.744 ****** 2025-09-08 01:02:38.931216 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:38.931225 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:38.931234 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:38.931244 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:38.931253 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:38.931262 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:38.931272 | orchestrator | 2025-09-08 01:02:38.931282 | orchestrator | TASK [cinder : Check cinder containers] 
**************************************** 2025-09-08 01:02:38.931291 | orchestrator | Monday 08 September 2025 01:00:21 +0000 (0:00:01.095) 0:01:16.839 ****** 2025-09-08 01:02:38.931308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.931325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.931336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-08 01:02:38.931350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.931361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.931376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.931392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.931417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.931427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.931442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.931452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.931475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-08 01:02:38.931486 | orchestrator | 2025-09-08 01:02:38.931496 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-09-08 01:02:38.931506 | orchestrator | Monday 08 September 2025 01:00:23 +0000 (0:00:02.453) 0:01:19.293 ****** 2025-09-08 01:02:38.931515 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:38.931525 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:02:38.931535 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:02:38.931544 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:02:38.931554 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:02:38.931563 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:02:38.931573 | orchestrator | 2025-09-08 01:02:38.931582 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-08 01:02:38.931592 | orchestrator | Monday 08 September 2025 01:00:24 +0000 (0:00:00.922) 0:01:20.216 ****** 2025-09-08 01:02:38.931601 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:38.931611 | orchestrator | 2025-09-08 01:02:38.931621 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-08 01:02:38.931631 | orchestrator | Monday 08 September 2025 01:00:26 +0000 (0:00:02.002) 0:01:22.218 ****** 2025-09-08 01:02:38.931640 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:38.931650 | orchestrator | 2025-09-08 01:02:38.931659 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-08 01:02:38.931669 | orchestrator | Monday 08 September 2025 01:00:28 +0000 (0:00:02.350) 0:01:24.569 ****** 2025-09-08 01:02:38.931678 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:38.931688 | orchestrator | 2025-09-08 01:02:38.931697 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:02:38.931707 | orchestrator | Monday 08 September 2025 01:00:47 +0000 (0:00:19.047) 0:01:43.617 ****** 2025-09-08 01:02:38.931716 | orchestrator | 
2025-09-08 01:02:38.931726 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:02:38.931736 | orchestrator | Monday 08 September 2025 01:00:48 +0000 (0:00:00.116) 0:01:43.734 ****** 2025-09-08 01:02:38.931745 | orchestrator | 2025-09-08 01:02:38.931755 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:02:38.931764 | orchestrator | Monday 08 September 2025 01:00:48 +0000 (0:00:00.134) 0:01:43.869 ****** 2025-09-08 01:02:38.931774 | orchestrator | 2025-09-08 01:02:38.931783 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:02:38.931793 | orchestrator | Monday 08 September 2025 01:00:48 +0000 (0:00:00.077) 0:01:43.947 ****** 2025-09-08 01:02:38.931802 | orchestrator | 2025-09-08 01:02:38.931812 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:02:38.931826 | orchestrator | Monday 08 September 2025 01:00:48 +0000 (0:00:00.116) 0:01:44.063 ****** 2025-09-08 01:02:38.931836 | orchestrator | 2025-09-08 01:02:38.931846 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-08 01:02:38.931856 | orchestrator | Monday 08 September 2025 01:00:48 +0000 (0:00:00.099) 0:01:44.162 ****** 2025-09-08 01:02:38.931865 | orchestrator | 2025-09-08 01:02:38.931880 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-08 01:02:38.931890 | orchestrator | Monday 08 September 2025 01:00:48 +0000 (0:00:00.116) 0:01:44.278 ****** 2025-09-08 01:02:38.931906 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:38.931916 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:38.931926 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:02:38.931935 | orchestrator | 2025-09-08 01:02:38.931945 | orchestrator | RUNNING HANDLER [cinder : 
Restart cinder-scheduler container] ****************** 2025-09-08 01:02:38.931954 | orchestrator | Monday 08 September 2025 01:01:13 +0000 (0:00:24.577) 0:02:08.856 ****** 2025-09-08 01:02:38.931964 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:02:38.931973 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:02:38.931983 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:02:38.931993 | orchestrator | 2025-09-08 01:02:38.932002 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-08 01:02:38.932012 | orchestrator | Monday 08 September 2025 01:01:24 +0000 (0:00:11.683) 0:02:20.539 ****** 2025-09-08 01:02:38.932022 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:02:38.932031 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:02:38.932041 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:02:38.932050 | orchestrator | 2025-09-08 01:02:38.932060 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-08 01:02:38.932069 | orchestrator | Monday 08 September 2025 01:02:31 +0000 (0:01:06.582) 0:03:27.122 ****** 2025-09-08 01:02:38.932079 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:02:38.932089 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:02:38.932098 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:02:38.932108 | orchestrator | 2025-09-08 01:02:38.932117 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-08 01:02:38.932127 | orchestrator | Monday 08 September 2025 01:02:37 +0000 (0:00:05.594) 0:03:32.716 ****** 2025-09-08 01:02:38.932137 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:02:38.932146 | orchestrator | 2025-09-08 01:02:38.932156 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:02:38.932170 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 
failed=0 skipped=11  rescued=0 ignored=0 2025-09-08 01:02:38.932180 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-08 01:02:38.932190 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-08 01:02:38.932200 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-08 01:02:38.932210 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-08 01:02:38.932219 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-08 01:02:38.932229 | orchestrator | 2025-09-08 01:02:38.932238 | orchestrator | 2025-09-08 01:02:38.932248 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:02:38.932258 | orchestrator | Monday 08 September 2025 01:02:37 +0000 (0:00:00.640) 0:03:33.357 ****** 2025-09-08 01:02:38.932267 | orchestrator | =============================================================================== 2025-09-08 01:02:38.932277 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 66.58s 2025-09-08 01:02:38.932287 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.58s 2025-09-08 01:02:38.932296 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.05s 2025-09-08 01:02:38.932306 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.68s 2025-09-08 01:02:38.932315 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.82s 2025-09-08 01:02:38.932330 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.70s 2025-09-08 01:02:38.932340 | orchestrator | service-ks-register : cinder | Creating 
endpoints ----------------------- 5.89s 2025-09-08 01:02:38.932350 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.59s 2025-09-08 01:02:38.932359 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.26s 2025-09-08 01:02:38.932369 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.80s 2025-09-08 01:02:38.932378 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.73s 2025-09-08 01:02:38.932388 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.66s 2025-09-08 01:02:38.932412 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.35s 2025-09-08 01:02:38.932423 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.31s 2025-09-08 01:02:38.932433 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.98s 2025-09-08 01:02:38.932442 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.95s 2025-09-08 01:02:38.932452 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.93s 2025-09-08 01:02:38.932461 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.50s 2025-09-08 01:02:38.932475 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.45s 2025-09-08 01:02:38.932485 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.35s 2025-09-08 01:02:38.932495 | orchestrator | 2025-09-08 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:02:41.972241 | orchestrator | 2025-09-08 01:02:41 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:02:41.974732 | orchestrator | 2025-09-08 01:02:41 | INFO  | Task 
c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:02:41.977488 | orchestrator | 2025-09-08 01:02:41 | INFO  | Task 9bd6519c-ad65-464c-bbd7-93c9e5431f73 is in state STARTED 2025-09-08 01:02:41.979554 | orchestrator | 2025-09-08 01:02:41 | INFO  | Task 09f6687d-8e8c-4267-8f23-f65446e3502c is in state STARTED 2025-09-08 01:02:41.979754 | orchestrator | 2025-09-08 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:04:22.239943 | orchestrator | 2025-09-08 01:04:22 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:04:22.240244 | orchestrator | 2025-09-08 01:04:22 | INFO  | Task
c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:04:22.245204 | orchestrator | 2025-09-08 01:04:22 | INFO  | Task 9bd6519c-ad65-464c-bbd7-93c9e5431f73 is in state STARTED 2025-09-08 01:04:22.245971 | orchestrator | 2025-09-08 01:04:22 | INFO  | Task 09f6687d-8e8c-4267-8f23-f65446e3502c is in state STARTED 2025-09-08 01:04:22.246403 | orchestrator | 2025-09-08 01:04:22 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:04:25.273445 | orchestrator | 2025-09-08 01:04:25 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:04:25.273699 | orchestrator | 2025-09-08 01:04:25 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:04:25.275011 | orchestrator | 2025-09-08 01:04:25 | INFO  | Task 9bd6519c-ad65-464c-bbd7-93c9e5431f73 is in state SUCCESS 2025-09-08 01:04:25.276661 | orchestrator | 2025-09-08 01:04:25.276748 | orchestrator | 2025-09-08 01:04:25.276762 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:04:25.276796 | orchestrator | 2025-09-08 01:04:25.276807 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:04:25.276817 | orchestrator | Monday 08 September 2025 01:02:24 +0000 (0:00:00.272) 0:00:00.272 ****** 2025-09-08 01:04:25.276827 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:04:25.276839 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:04:25.276866 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:04:25.276876 | orchestrator | 2025-09-08 01:04:25.276887 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:04:25.276897 | orchestrator | Monday 08 September 2025 01:02:25 +0000 (0:00:00.285) 0:00:00.558 ****** 2025-09-08 01:04:25.276907 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-08 01:04:25.276918 | orchestrator | ok: [testbed-node-1] => 
(item=enable_barbican_True) 2025-09-08 01:04:25.276928 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-08 01:04:25.276938 | orchestrator | 2025-09-08 01:04:25.276948 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-08 01:04:25.276958 | orchestrator | 2025-09-08 01:04:25.276968 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-08 01:04:25.276978 | orchestrator | Monday 08 September 2025 01:02:25 +0000 (0:00:00.412) 0:00:00.970 ****** 2025-09-08 01:04:25.276988 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:04:25.276999 | orchestrator | 2025-09-08 01:04:25.277009 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-08 01:04:25.277019 | orchestrator | Monday 08 September 2025 01:02:26 +0000 (0:00:00.556) 0:00:01.527 ****** 2025-09-08 01:04:25.277050 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-08 01:04:25.277061 | orchestrator | 2025-09-08 01:04:25.277072 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-08 01:04:25.277082 | orchestrator | Monday 08 September 2025 01:02:29 +0000 (0:00:03.339) 0:00:04.866 ****** 2025-09-08 01:04:25.277091 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-08 01:04:25.277101 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-08 01:04:25.277111 | orchestrator | 2025-09-08 01:04:25.277121 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-08 01:04:25.277130 | orchestrator | Monday 08 September 2025 01:02:35 +0000 (0:00:06.522) 0:00:11.388 ****** 2025-09-08 01:04:25.277141 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-08 01:04:25.277151 | orchestrator | 2025-09-08 01:04:25.277175 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-08 01:04:25.277185 | orchestrator | Monday 08 September 2025 01:02:39 +0000 (0:00:03.593) 0:00:14.982 ****** 2025-09-08 01:04:25.277195 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-08 01:04:25.277205 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-08 01:04:25.277214 | orchestrator | 2025-09-08 01:04:25.277224 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-08 01:04:25.277233 | orchestrator | Monday 08 September 2025 01:02:43 +0000 (0:00:04.020) 0:00:19.002 ****** 2025-09-08 01:04:25.277243 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-08 01:04:25.277253 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-08 01:04:25.277264 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-08 01:04:25.277276 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-08 01:04:25.277288 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-08 01:04:25.277300 | orchestrator | 2025-09-08 01:04:25.277312 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-08 01:04:25.277324 | orchestrator | Monday 08 September 2025 01:02:59 +0000 (0:00:16.309) 0:00:35.311 ****** 2025-09-08 01:04:25.277335 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-08 01:04:25.277346 | orchestrator | 2025-09-08 01:04:25.277359 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-08 01:04:25.277370 | orchestrator | Monday 08 September 2025 01:03:04 +0000 (0:00:04.813) 0:00:40.125 ****** 2025-09-08 01:04:25.277386 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.277422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.277444 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.277457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277567 | orchestrator | 2025-09-08 01:04:25.277578 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-08 01:04:25.277590 | orchestrator | Monday 08 September 2025 01:03:06 +0000 (0:00:02.216) 0:00:42.342 ****** 2025-09-08 01:04:25.277603 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-08 01:04:25.277614 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-08 01:04:25.277625 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-08 01:04:25.277634 | orchestrator | 2025-09-08 01:04:25.277644 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-08 01:04:25.277654 | orchestrator | Monday 08 September 2025 01:03:08 +0000 (0:00:01.532) 0:00:43.875 ****** 2025-09-08 01:04:25.277663 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:04:25.277673 | orchestrator | 2025-09-08 01:04:25.277683 | 
orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-08 01:04:25.277692 | orchestrator | Monday 08 September 2025 01:03:08 +0000 (0:00:00.113) 0:00:43.988 ****** 2025-09-08 01:04:25.277702 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:04:25.277712 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:04:25.277722 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:04:25.277731 | orchestrator | 2025-09-08 01:04:25.277741 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-08 01:04:25.277750 | orchestrator | Monday 08 September 2025 01:03:08 +0000 (0:00:00.410) 0:00:44.399 ****** 2025-09-08 01:04:25.277760 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:04:25.277769 | orchestrator | 2025-09-08 01:04:25.277779 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-08 01:04:25.277788 | orchestrator | Monday 08 September 2025 01:03:09 +0000 (0:00:00.489) 0:00:44.888 ****** 2025-09-08 01:04:25.277799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.277826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.277837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.277848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.277927 | orchestrator | 2025-09-08 01:04:25.277937 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-08 01:04:25.277957 | orchestrator | Monday 08 September 2025 01:03:13 +0000 (0:00:04.346) 0:00:49.235 ****** 2025-09-08 01:04:25.277968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:25.277979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.277989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278005 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:04:25.278076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:25.278091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:25.278113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278123 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:04:25.278133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278160 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:04:25.278170 | orchestrator | 2025-09-08 01:04:25.278186 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-08 01:04:25.278196 | orchestrator | Monday 08 September 2025 01:03:15 +0000 (0:00:01.550) 0:00:50.786 ****** 2025-09-08 01:04:25.278212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:25.278222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278243 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:04:25.278253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:25.278275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-09-08 01:04:25.278295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278306 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:04:25.278317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:25.278327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.278364 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:04:25.278374 | orchestrator | 2025-09-08 01:04:25.278384 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-08 01:04:25.278394 | orchestrator | Monday 08 September 2025 01:03:16 +0000 (0:00:00.981) 0:00:51.767 ****** 2025-09-08 01:04:25.278404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.278814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.278935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.278952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.278993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279081 | orchestrator | 2025-09-08 01:04:25.279095 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-08 01:04:25.279108 | orchestrator | Monday 08 September 2025 01:03:20 +0000 (0:00:03.852) 0:00:55.619 ****** 2025-09-08 01:04:25.279119 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:04:25.279131 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:25.279142 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:04:25.279152 | orchestrator | 2025-09-08 01:04:25.279164 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-08 01:04:25.279175 | orchestrator | Monday 08 September 2025 01:03:22 +0000 (0:00:02.463) 0:00:58.083 ****** 2025-09-08 01:04:25.279195 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 01:04:25.279206 | orchestrator | 2025-09-08 01:04:25.279217 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-08 01:04:25.279228 | orchestrator | Monday 08 September 2025 01:03:24 +0000 (0:00:01.668) 0:00:59.753 ****** 2025-09-08 01:04:25.279239 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:04:25.279250 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:04:25.279260 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:04:25.279271 | orchestrator | 2025-09-08 01:04:25.279282 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-08 01:04:25.279293 | orchestrator | 
Monday 08 September 2025 01:03:25 +0000 (0:00:00.901) 0:01:00.654 ****** 2025-09-08 01:04:25.279308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.279331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.279350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.279365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279465 | orchestrator | 2025-09-08 01:04:25.279479 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-08 01:04:25.279516 | orchestrator | Monday 08 September 2025 01:03:34 +0000 (0:00:09.702) 0:01:10.357 ****** 2025-09-08 01:04:25.279530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:25.279550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.279565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.279578 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:04:25.279597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:25.279616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-08 01:04:25.279636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.279650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.279662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.279673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:04:25.279685 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:04:25.279696 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:04:25.279707 | orchestrator | 2025-09-08 01:04:25.279718 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-08 01:04:25.279729 | orchestrator | Monday 08 September 2025 01:03:35 +0000 (0:00:00.959) 0:01:11.316 ****** 2025-09-08 01:04:25.279753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.279767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.279785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-08 01:04:25.279796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279845 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:04:25.279886 | orchestrator | 2025-09-08 01:04:25.279898 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-08 
01:04:25.279909 | orchestrator | Monday 08 September 2025 01:03:39 +0000 (0:00:03.423) 0:01:14.740 ****** 2025-09-08 01:04:25.279920 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:04:25.279931 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:04:25.279942 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:04:25.279953 | orchestrator | 2025-09-08 01:04:25.279964 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-08 01:04:25.279975 | orchestrator | Monday 08 September 2025 01:03:39 +0000 (0:00:00.607) 0:01:15.347 ****** 2025-09-08 01:04:25.279985 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:25.279996 | orchestrator | 2025-09-08 01:04:25.280007 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-08 01:04:25.280018 | orchestrator | Monday 08 September 2025 01:03:42 +0000 (0:00:02.461) 0:01:17.809 ****** 2025-09-08 01:04:25.280029 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:25.280040 | orchestrator | 2025-09-08 01:04:25.280050 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-08 01:04:25.280061 | orchestrator | Monday 08 September 2025 01:03:44 +0000 (0:00:02.543) 0:01:20.352 ****** 2025-09-08 01:04:25.280072 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:25.280083 | orchestrator | 2025-09-08 01:04:25.280093 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-08 01:04:25.280104 | orchestrator | Monday 08 September 2025 01:03:57 +0000 (0:00:12.424) 0:01:32.777 ****** 2025-09-08 01:04:25.280115 | orchestrator | 2025-09-08 01:04:25.280126 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-08 01:04:25.280137 | orchestrator | Monday 08 September 2025 01:03:57 +0000 (0:00:00.223) 0:01:33.000 ****** 2025-09-08 01:04:25.280147 | 
orchestrator | 2025-09-08 01:04:25.280158 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-08 01:04:25.280169 | orchestrator | Monday 08 September 2025 01:03:57 +0000 (0:00:00.239) 0:01:33.239 ****** 2025-09-08 01:04:25.280180 | orchestrator | 2025-09-08 01:04:25.280191 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-08 01:04:25.280202 | orchestrator | Monday 08 September 2025 01:03:57 +0000 (0:00:00.162) 0:01:33.402 ****** 2025-09-08 01:04:25.280225 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:04:25.280236 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:04:25.280247 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:25.280258 | orchestrator | 2025-09-08 01:04:25.280269 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-08 01:04:25.280280 | orchestrator | Monday 08 September 2025 01:04:09 +0000 (0:00:11.452) 0:01:44.855 ****** 2025-09-08 01:04:25.280291 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:25.280302 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:04:25.280318 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:04:25.280330 | orchestrator | 2025-09-08 01:04:25.280341 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-08 01:04:25.280351 | orchestrator | Monday 08 September 2025 01:04:16 +0000 (0:00:06.839) 0:01:51.695 ****** 2025-09-08 01:04:25.280362 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:04:25.280373 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:04:25.280384 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:04:25.280394 | orchestrator | 2025-09-08 01:04:25.280410 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:04:25.280423 | orchestrator | testbed-node-0 : ok=24  changed=18  
unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-08 01:04:25.280436 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:04:25.280447 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:04:25.280457 | orchestrator | 2025-09-08 01:04:25.280468 | orchestrator | 2025-09-08 01:04:25.280479 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:04:25.280515 | orchestrator | Monday 08 September 2025 01:04:23 +0000 (0:00:07.213) 0:01:58.908 ****** 2025-09-08 01:04:25.280527 | orchestrator | =============================================================================== 2025-09-08 01:04:25.280538 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.31s 2025-09-08 01:04:25.280549 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.42s 2025-09-08 01:04:25.280560 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.45s 2025-09-08 01:04:25.280570 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.70s 2025-09-08 01:04:25.280581 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.21s 2025-09-08 01:04:25.280592 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.84s 2025-09-08 01:04:25.280603 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.52s 2025-09-08 01:04:25.280613 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.81s 2025-09-08 01:04:25.280624 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.35s 2025-09-08 01:04:25.280635 | orchestrator | service-ks-register : barbican | Creating users 
------------------------- 4.02s 2025-09-08 01:04:25.280646 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.85s 2025-09-08 01:04:25.280656 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.59s 2025-09-08 01:04:25.280667 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.42s 2025-09-08 01:04:25.280678 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.34s 2025-09-08 01:04:25.280689 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.54s 2025-09-08 01:04:25.280700 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.47s 2025-09-08 01:04:25.280710 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.46s 2025-09-08 01:04:25.280728 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.22s 2025-09-08 01:04:25.280739 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.67s 2025-09-08 01:04:25.280750 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.55s 2025-09-08 01:04:25.280761 | orchestrator | 2025-09-08 01:04:25 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:04:25.280773 | orchestrator | 2025-09-08 01:04:25 | INFO  | Task 09f6687d-8e8c-4267-8f23-f65446e3502c is in state STARTED 2025-09-08 01:04:25.280784 | orchestrator | 2025-09-08 01:04:25 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:04:28.307947 | orchestrator | 2025-09-08 01:04:28 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:04:28.308658 | orchestrator | 2025-09-08 01:04:28 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:04:28.310172 | orchestrator | 2025-09-08 01:04:28 
| INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:04:28.310986 | orchestrator | 2025-09-08 01:04:28 | INFO  | Task 09f6687d-8e8c-4267-8f23-f65446e3502c is in state STARTED 2025-09-08 01:04:28.311013 | orchestrator | 2025-09-08 01:04:28 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:05:50.590636 | orchestrator | 2025-09-08 01:05:50 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:05:50.591906 | orchestrator | 2025-09-08 01:05:50 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:05:50.593730 | orchestrator | 2025-09-08 01:05:50 | INFO  | Task 
78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:05:50.595260 | orchestrator | 2025-09-08 01:05:50 | INFO  | Task 09f6687d-8e8c-4267-8f23-f65446e3502c is in state STARTED 2025-09-08 01:05:50.595287 | orchestrator | 2025-09-08 01:05:50 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:05:53.646285 | orchestrator | 2025-09-08 01:05:53 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:05:53.649270 | orchestrator | 2025-09-08 01:05:53 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:05:53.651062 | orchestrator | 2025-09-08 01:05:53 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:05:53.654478 | orchestrator | 2025-09-08 01:05:53 | INFO  | Task 09f6687d-8e8c-4267-8f23-f65446e3502c is in state STARTED 2025-09-08 01:05:53.654503 | orchestrator | 2025-09-08 01:05:53 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:05:56.704251 | orchestrator | 2025-09-08 01:05:56 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:05:56.705336 | orchestrator | 2025-09-08 01:05:56 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:05:56.708738 | orchestrator | 2025-09-08 01:05:56 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:05:56.711934 | orchestrator | 2025-09-08 01:05:56 | INFO  | Task 09f6687d-8e8c-4267-8f23-f65446e3502c is in state SUCCESS 2025-09-08 01:05:56.713849 | orchestrator | 2025-09-08 01:05:56.713895 | orchestrator | 2025-09-08 01:05:56.713908 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:05:56.713920 | orchestrator | 2025-09-08 01:05:56.713931 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:05:56.713943 | orchestrator | Monday 08 September 2025 01:02:43 +0000 (0:00:00.508) 0:00:00.508 
****** 2025-09-08 01:05:56.713954 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:05:56.714127 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:05:56.714143 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:05:56.714154 | orchestrator | 2025-09-08 01:05:56.714166 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:05:56.714177 | orchestrator | Monday 08 September 2025 01:02:44 +0000 (0:00:00.372) 0:00:00.881 ****** 2025-09-08 01:05:56.714238 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-08 01:05:56.714269 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-08 01:05:56.714281 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-08 01:05:56.714291 | orchestrator | 2025-09-08 01:05:56.714313 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-08 01:05:56.714328 | orchestrator | 2025-09-08 01:05:56.714346 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-08 01:05:56.714364 | orchestrator | Monday 08 September 2025 01:02:44 +0000 (0:00:00.406) 0:00:01.288 ****** 2025-09-08 01:05:56.714398 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:05:56.714419 | orchestrator | 2025-09-08 01:05:56.714437 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-08 01:05:56.714456 | orchestrator | Monday 08 September 2025 01:02:45 +0000 (0:00:00.952) 0:00:02.241 ****** 2025-09-08 01:05:56.714477 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-08 01:05:56.714490 | orchestrator | 2025-09-08 01:05:56.714509 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-08 01:05:56.714529 | orchestrator | Monday 08 September 
2025 01:02:49 +0000 (0:00:03.757) 0:00:05.998 ****** 2025-09-08 01:05:56.714590 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-08 01:05:56.714640 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-08 01:05:56.714661 | orchestrator | 2025-09-08 01:05:56.714680 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-08 01:05:56.714699 | orchestrator | Monday 08 September 2025 01:02:55 +0000 (0:00:06.695) 0:00:12.693 ****** 2025-09-08 01:05:56.714718 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-08 01:05:56.714736 | orchestrator | 2025-09-08 01:05:56.714756 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-08 01:05:56.714770 | orchestrator | Monday 08 September 2025 01:02:59 +0000 (0:00:03.373) 0:00:16.066 ****** 2025-09-08 01:05:56.714783 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-08 01:05:56.714796 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-08 01:05:56.714807 | orchestrator | 2025-09-08 01:05:56.714817 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-08 01:05:56.714828 | orchestrator | Monday 08 September 2025 01:03:03 +0000 (0:00:03.812) 0:00:19.879 ****** 2025-09-08 01:05:56.714839 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-08 01:05:56.714849 | orchestrator | 2025-09-08 01:05:56.714860 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-08 01:05:56.714870 | orchestrator | Monday 08 September 2025 01:03:06 +0000 (0:00:03.624) 0:00:23.504 ****** 2025-09-08 01:05:56.714881 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-08 01:05:56.714891 | orchestrator | 
2025-09-08 01:05:56.714902 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-08 01:05:56.714912 | orchestrator | Monday 08 September 2025 01:03:11 +0000 (0:00:04.518) 0:00:28.022 ****** 2025-09-08 01:05:56.714926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.714963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.714976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.714997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715341 | orchestrator | 2025-09-08 01:05:56.715352 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-08 01:05:56.715364 | orchestrator | Monday 08 September 2025 01:03:15 +0000 (0:00:04.028) 0:00:32.050 ****** 2025-09-08 01:05:56.715375 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:05:56.715386 | orchestrator | 2025-09-08 01:05:56.715397 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-08 01:05:56.715407 | orchestrator | Monday 08 September 2025 01:03:15 +0000 (0:00:00.157) 0:00:32.208 ****** 2025-09-08 01:05:56.715418 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:05:56.715429 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:05:56.715439 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:05:56.715451 | orchestrator | 2025-09-08 01:05:56.715461 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-08 01:05:56.715472 | orchestrator | Monday 08 September 2025 01:03:15 +0000 
(0:00:00.317) 0:00:32.526 ****** 2025-09-08 01:05:56.715483 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:05:56.715494 | orchestrator | 2025-09-08 01:05:56.715505 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-08 01:05:56.715515 | orchestrator | Monday 08 September 2025 01:03:16 +0000 (0:00:00.682) 0:00:33.208 ****** 2025-09-08 01:05:56.715533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.715583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.715595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.715607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715619 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 
01:05:56.715671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715705 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.715815 | orchestrator | 2025-09-08 01:05:56.715826 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-08 01:05:56.715837 | orchestrator | Monday 08 September 2025 01:03:24 +0000 (0:00:07.675) 0:00:40.884 ****** 2025-09-08 01:05:56.715848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:05:56.715876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:05:56.715888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.715900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.715911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.715922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.715934 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:05:56.715945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:05:56.716269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:05:56.716298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716310 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716353 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:05:56.716364 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:05:56.716384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:05:56.716401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716454 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:05:56.716465 | orchestrator | 2025-09-08 01:05:56.716476 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-08 01:05:56.716487 | orchestrator | Monday 08 September 2025 01:03:26 +0000 (0:00:02.259) 0:00:43.145 ****** 2025-09-08 01:05:56.716499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:05:56.716520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:05:56.716532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716645 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:05:56.716656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:05:56.716674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:05:56.716691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716742 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:05:56.716754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 
01:05:56.716773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:05:56.716790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716813 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.716842 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:05:56.716853 | orchestrator | 2025-09-08 01:05:56.716864 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-08 01:05:56.716875 | orchestrator | Monday 08 September 2025 01:03:29 +0000 (0:00:03.136) 0:00:46.282 ****** 2025-09-08 01:05:56.716888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.716913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.716928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.716942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.716962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.716975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.716995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.717190 | orchestrator | 2025-09-08 01:05:56.717203 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-08 01:05:56.717216 | orchestrator | Monday 08 September 2025 01:03:35 +0000 (0:00:06.242) 0:00:52.524 ****** 2025-09-08 01:05:56.717230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.717250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
2025-09-08 01:05:56 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:05:56.717896 | orchestrator | '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.717923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.717944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.717957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.717968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.717979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718195 | orchestrator |
2025-09-08 01:05:56.718227 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-09-08 01:05:56.718238 | orchestrator | Monday 08 September 2025 01:03:56 +0000 (0:00:20.863) 0:01:13.388 ******
2025-09-08 01:05:56.718247 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-08 01:05:56.718257 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-08 01:05:56.718267 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-08 01:05:56.718276 | orchestrator |
2025-09-08 01:05:56.718286 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-09-08 01:05:56.718295 | orchestrator | Monday 08 September 2025 01:04:05 +0000 (0:00:08.566) 0:01:21.955 ******
2025-09-08 01:05:56.718304 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-08 01:05:56.718314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-08 01:05:56.718323 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-08 01:05:56.718333 | orchestrator |
2025-09-08 01:05:56.718342 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-09-08 01:05:56.718351 | orchestrator | Monday 08 September 2025 01:04:08 +0000 (0:00:03.182) 0:01:25.137 ******
2025-09-08 01:05:56.718368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:05:56.718389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:05:56.718400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:05:56.718411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.718421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.718478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.718490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718641 | orchestrator |
2025-09-08 01:05:56.718652 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-09-08 01:05:56.718664 | orchestrator | Monday 08 September 2025 01:04:11 +0000 (0:00:03.528) 0:01:28.665 ******
2025-09-08 01:05:56.718681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:05:56.718707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:05:56.718720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:05:56.718732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.718744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.718814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.718868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.718933 | orchestrator |
2025-09-08 01:05:56.718943 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-08 01:05:56.718959 | orchestrator | Monday 08 September 2025 01:04:16 +0000 (0:00:04.240) 0:01:32.905 ******
2025-09-08 01:05:56.718968 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:05:56.718978 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:05:56.718988 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:05:56.718997 | orchestrator |
2025-09-08 01:05:56.719007 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-09-08 01:05:56.719016 | orchestrator | Monday 08 September 2025 01:04:16 +0000 (0:00:00.761) 0:01:33.667 ******
2025-09-08 01:05:56.719031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-08 01:05:56.719046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-08 01:05:56.719057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.719067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.719077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.719095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:05:56.719105 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:05:56.719119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:05:56.719135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:05:56.719145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.719155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.719165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.719181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.719191 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:05:56.719207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-08 01:05:56.719221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-08 01:05:56.719232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.719242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.719252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.719267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:05:56.719277 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:05:56.719287 | orchestrator | 2025-09-08 01:05:56.719297 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-08 01:05:56.719306 | orchestrator | Monday 08 September 2025 01:04:18 +0000 (0:00:01.837) 0:01:35.505 ****** 2025-09-08 01:05:56.719323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.719338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.719349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-08 01:05:56.719359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719375 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719416 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:05:56.719565 | orchestrator | 2025-09-08 01:05:56.719575 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-08 01:05:56.719585 | orchestrator | Monday 08 September 2025 01:04:24 +0000 (0:00:05.238) 0:01:40.743 ****** 2025-09-08 01:05:56.719594 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:05:56.719604 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:05:56.719614 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:05:56.719624 | orchestrator | 2025-09-08 01:05:56.719633 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-08 01:05:56.719643 | orchestrator | Monday 08 September 2025 01:04:24 +0000 (0:00:00.554) 0:01:41.298 ****** 2025-09-08 01:05:56.719653 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-08 01:05:56.719663 | orchestrator | 2025-09-08 
01:05:56.719673 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-08 01:05:56.719683 | orchestrator | Monday 08 September 2025 01:04:26 +0000 (0:00:02.303) 0:01:43.602 ****** 2025-09-08 01:05:56.719692 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-08 01:05:56.719702 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-08 01:05:56.719712 | orchestrator | 2025-09-08 01:05:56.719721 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-08 01:05:56.719736 | orchestrator | Monday 08 September 2025 01:04:29 +0000 (0:00:02.801) 0:01:46.404 ****** 2025-09-08 01:05:56.719746 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:05:56.719755 | orchestrator | 2025-09-08 01:05:56.719765 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-08 01:05:56.719775 | orchestrator | Monday 08 September 2025 01:04:44 +0000 (0:00:15.217) 0:02:01.621 ****** 2025-09-08 01:05:56.719784 | orchestrator | 2025-09-08 01:05:56.719794 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-08 01:05:56.719803 | orchestrator | Monday 08 September 2025 01:04:44 +0000 (0:00:00.074) 0:02:01.696 ****** 2025-09-08 01:05:56.719813 | orchestrator | 2025-09-08 01:05:56.719822 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-08 01:05:56.719836 | orchestrator | Monday 08 September 2025 01:04:45 +0000 (0:00:00.068) 0:02:01.765 ****** 2025-09-08 01:05:56.719846 | orchestrator | 2025-09-08 01:05:56.719855 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-08 01:05:56.719865 | orchestrator | Monday 08 September 2025 01:04:45 +0000 (0:00:00.070) 0:02:01.835 ****** 2025-09-08 01:05:56.719874 | orchestrator | changed: [testbed-node-0] 
2025-09-08 01:05:56.719884 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:05:56.719900 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:05:56.719909 | orchestrator | 2025-09-08 01:05:56.719919 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-08 01:05:56.719929 | orchestrator | Monday 08 September 2025 01:04:59 +0000 (0:00:14.082) 0:02:15.918 ****** 2025-09-08 01:05:56.719938 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:05:56.719948 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:05:56.719957 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:05:56.719967 | orchestrator | 2025-09-08 01:05:56.719977 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-08 01:05:56.719986 | orchestrator | Monday 08 September 2025 01:05:12 +0000 (0:00:13.203) 0:02:29.122 ****** 2025-09-08 01:05:56.719996 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:05:56.720005 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:05:56.720015 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:05:56.720024 | orchestrator | 2025-09-08 01:05:56.720034 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-08 01:05:56.720044 | orchestrator | Monday 08 September 2025 01:05:25 +0000 (0:00:13.178) 0:02:42.301 ****** 2025-09-08 01:05:56.720053 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:05:56.720063 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:05:56.720072 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:05:56.720082 | orchestrator | 2025-09-08 01:05:56.720091 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-08 01:05:56.720101 | orchestrator | Monday 08 September 2025 01:05:32 +0000 (0:00:06.528) 0:02:48.829 ****** 2025-09-08 01:05:56.720111 | orchestrator | changed: [testbed-node-0] 2025-09-08 
01:05:56.720120 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:05:56.720130 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:05:56.720139 | orchestrator | 2025-09-08 01:05:56.720149 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-08 01:05:56.720158 | orchestrator | Monday 08 September 2025 01:05:39 +0000 (0:00:07.647) 0:02:56.477 ****** 2025-09-08 01:05:56.720168 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:05:56.720178 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:05:56.720187 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:05:56.720197 | orchestrator | 2025-09-08 01:05:56.720207 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-08 01:05:56.720216 | orchestrator | Monday 08 September 2025 01:05:48 +0000 (0:00:08.844) 0:03:05.321 ****** 2025-09-08 01:05:56.720226 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:05:56.720235 | orchestrator | 2025-09-08 01:05:56.720245 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:05:56.720255 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-08 01:05:56.720265 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:05:56.720275 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:05:56.720285 | orchestrator | 2025-09-08 01:05:56.720295 | orchestrator | 2025-09-08 01:05:56.720304 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:05:56.720314 | orchestrator | Monday 08 September 2025 01:05:55 +0000 (0:00:07.247) 0:03:12.569 ****** 2025-09-08 01:05:56.720324 | orchestrator | 
=============================================================================== 2025-09-08 01:05:56.720334 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.86s 2025-09-08 01:05:56.720343 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.22s 2025-09-08 01:05:56.720353 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.08s 2025-09-08 01:05:56.720362 | orchestrator | designate : Restart designate-api container ---------------------------- 13.20s 2025-09-08 01:05:56.720377 | orchestrator | designate : Restart designate-central container ------------------------ 13.18s 2025-09-08 01:05:56.720387 | orchestrator | designate : Restart designate-worker container -------------------------- 8.84s 2025-09-08 01:05:56.720397 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.57s 2025-09-08 01:05:56.720406 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.68s 2025-09-08 01:05:56.720416 | orchestrator | designate : Restart designate-mdns container ---------------------------- 7.65s 2025-09-08 01:05:56.720426 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.25s 2025-09-08 01:05:56.720469 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.70s 2025-09-08 01:05:56.720480 | orchestrator | designate : Restart designate-producer container ------------------------ 6.53s 2025-09-08 01:05:56.720489 | orchestrator | designate : Copying over config.json files for services ----------------- 6.24s 2025-09-08 01:05:56.720499 | orchestrator | designate : Check designate containers ---------------------------------- 5.24s 2025-09-08 01:05:56.720508 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.52s 2025-09-08 01:05:56.720518 | orchestrator | designate : 
Copying over rndc.key --------------------------------------- 4.24s 2025-09-08 01:05:56.720527 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.03s 2025-09-08 01:05:56.720541 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.81s 2025-09-08 01:05:56.720566 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.76s 2025-09-08 01:05:56.720577 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.62s 2025-09-08 01:05:59.765688 | orchestrator | 2025-09-08 01:05:59 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:05:59.766276 | orchestrator | 2025-09-08 01:05:59 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:05:59.767058 | orchestrator | 2025-09-08 01:05:59 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:05:59.767947 | orchestrator | 2025-09-08 01:05:59 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:05:59.768042 | orchestrator | 2025-09-08 01:05:59 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:02.803883 | orchestrator | 2025-09-08 01:06:02 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:06:02.804308 | orchestrator | 2025-09-08 01:06:02 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:02.807165 | orchestrator | 2025-09-08 01:06:02 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state STARTED 2025-09-08 01:06:02.808888 | orchestrator | 2025-09-08 01:06:02 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:02.808912 | orchestrator | 2025-09-08 01:06:02 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:05.852548 | orchestrator | 2025-09-08 01:06:05 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in 
state STARTED 2025-09-08 01:06:33.345802 | orchestrator | 2025-09-08 01:06:33 | INFO  |
Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:33.352307 | orchestrator | 2025-09-08 01:06:33 | INFO  | Task c47f1336-cd72-4228-94b3-cbde4221bbb1 is in state SUCCESS 2025-09-08 01:06:33.354966 | orchestrator | 2025-09-08 01:06:33.355001 | orchestrator | 2025-09-08 01:06:33.355013 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:06:33.355026 | orchestrator | 2025-09-08 01:06:33.355037 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:06:33.355048 | orchestrator | Monday 08 September 2025 01:02:02 +0000 (0:00:00.279) 0:00:00.279 ****** 2025-09-08 01:06:33.355060 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:06:33.355073 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:06:33.355083 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:06:33.355094 | orchestrator | ok: [testbed-node-3] 2025-09-08 01:06:33.355105 | orchestrator | ok: [testbed-node-4] 2025-09-08 01:06:33.355116 | orchestrator | ok: [testbed-node-5] 2025-09-08 01:06:33.355127 | orchestrator | 2025-09-08 01:06:33.355138 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:06:33.355149 | orchestrator | Monday 08 September 2025 01:02:02 +0000 (0:00:00.705) 0:00:00.984 ****** 2025-09-08 01:06:33.355160 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-08 01:06:33.355172 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-08 01:06:33.355183 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-08 01:06:33.355194 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-08 01:06:33.355204 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-08 01:06:33.355215 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-08 01:06:33.355226 | orchestrator | 2025-09-08 
01:06:33.355237 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-08 01:06:33.355248 | orchestrator | 2025-09-08 01:06:33.355259 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-08 01:06:33.355270 | orchestrator | Monday 08 September 2025 01:02:03 +0000 (0:00:00.698) 0:00:01.683 ****** 2025-09-08 01:06:33.355301 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:06:33.355335 | orchestrator | 2025-09-08 01:06:33.355346 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-08 01:06:33.355357 | orchestrator | Monday 08 September 2025 01:02:04 +0000 (0:00:01.317) 0:00:03.000 ****** 2025-09-08 01:06:33.355368 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:06:33.355379 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:06:33.355390 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:06:33.355401 | orchestrator | ok: [testbed-node-4] 2025-09-08 01:06:33.355588 | orchestrator | ok: [testbed-node-3] 2025-09-08 01:06:33.355602 | orchestrator | ok: [testbed-node-5] 2025-09-08 01:06:33.355635 | orchestrator | 2025-09-08 01:06:33.355648 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-08 01:06:33.355662 | orchestrator | Monday 08 September 2025 01:02:06 +0000 (0:00:01.562) 0:00:04.563 ****** 2025-09-08 01:06:33.355675 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:06:33.355687 | orchestrator | ok: [testbed-node-3] 2025-09-08 01:06:33.355700 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:06:33.355713 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:06:33.355726 | orchestrator | ok: [testbed-node-4] 2025-09-08 01:06:33.355738 | orchestrator | ok: [testbed-node-5] 2025-09-08 01:06:33.355751 | orchestrator | 2025-09-08 
01:06:33.355764 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-08 01:06:33.355777 | orchestrator | Monday 08 September 2025 01:02:07 +0000 (0:00:01.204) 0:00:05.768 ****** 2025-09-08 01:06:33.355791 | orchestrator | ok: [testbed-node-0] => { 2025-09-08 01:06:33.355804 | orchestrator |  "changed": false, 2025-09-08 01:06:33.355817 | orchestrator |  "msg": "All assertions passed" 2025-09-08 01:06:33.355831 | orchestrator | } 2025-09-08 01:06:33.355844 | orchestrator | ok: [testbed-node-1] => { 2025-09-08 01:06:33.355856 | orchestrator |  "changed": false, 2025-09-08 01:06:33.355869 | orchestrator |  "msg": "All assertions passed" 2025-09-08 01:06:33.355883 | orchestrator | } 2025-09-08 01:06:33.355896 | orchestrator | ok: [testbed-node-2] => { 2025-09-08 01:06:33.355909 | orchestrator |  "changed": false, 2025-09-08 01:06:33.355922 | orchestrator |  "msg": "All assertions passed" 2025-09-08 01:06:33.355933 | orchestrator | } 2025-09-08 01:06:33.355944 | orchestrator | ok: [testbed-node-3] => { 2025-09-08 01:06:33.355955 | orchestrator |  "changed": false, 2025-09-08 01:06:33.355966 | orchestrator |  "msg": "All assertions passed" 2025-09-08 01:06:33.355976 | orchestrator | } 2025-09-08 01:06:33.355987 | orchestrator | ok: [testbed-node-4] => { 2025-09-08 01:06:33.355998 | orchestrator |  "changed": false, 2025-09-08 01:06:33.356009 | orchestrator |  "msg": "All assertions passed" 2025-09-08 01:06:33.356019 | orchestrator | } 2025-09-08 01:06:33.356030 | orchestrator | ok: [testbed-node-5] => { 2025-09-08 01:06:33.356041 | orchestrator |  "changed": false, 2025-09-08 01:06:33.356064 | orchestrator |  "msg": "All assertions passed" 2025-09-08 01:06:33.356076 | orchestrator | } 2025-09-08 01:06:33.356087 | orchestrator | 2025-09-08 01:06:33.356099 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-08 01:06:33.356121 | orchestrator | Monday 08 September 2025 
01:02:08 +0000 (0:00:00.865) 0:00:06.634 ****** 2025-09-08 01:06:33.356133 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.356144 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.356154 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.356169 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.356187 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.356205 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.356223 | orchestrator | 2025-09-08 01:06:33.356240 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-08 01:06:33.356257 | orchestrator | Monday 08 September 2025 01:02:09 +0000 (0:00:00.585) 0:00:07.219 ****** 2025-09-08 01:06:33.356275 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-08 01:06:33.356294 | orchestrator | 2025-09-08 01:06:33.356312 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-08 01:06:33.356345 | orchestrator | Monday 08 September 2025 01:02:12 +0000 (0:00:03.420) 0:00:10.640 ****** 2025-09-08 01:06:33.356364 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-08 01:06:33.356380 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-08 01:06:33.356390 | orchestrator | 2025-09-08 01:06:33.356430 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-08 01:06:33.356442 | orchestrator | Monday 08 September 2025 01:02:19 +0000 (0:00:06.462) 0:00:17.102 ****** 2025-09-08 01:06:33.356453 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-08 01:06:33.356464 | orchestrator | 2025-09-08 01:06:33.356475 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-08 01:06:33.356486 | 
orchestrator | Monday 08 September 2025 01:02:22 +0000 (0:00:03.685) 0:00:20.787 ****** 2025-09-08 01:06:33.356496 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-08 01:06:33.356507 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-08 01:06:33.356518 | orchestrator | 2025-09-08 01:06:33.356529 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-08 01:06:33.356540 | orchestrator | Monday 08 September 2025 01:02:26 +0000 (0:00:03.612) 0:00:24.400 ****** 2025-09-08 01:06:33.356550 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-08 01:06:33.356561 | orchestrator | 2025-09-08 01:06:33.356571 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-08 01:06:33.356582 | orchestrator | Monday 08 September 2025 01:02:29 +0000 (0:00:03.412) 0:00:27.813 ****** 2025-09-08 01:06:33.356593 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-08 01:06:33.356603 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-08 01:06:33.356635 | orchestrator | 2025-09-08 01:06:33.356646 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-08 01:06:33.356657 | orchestrator | Monday 08 September 2025 01:02:37 +0000 (0:00:07.942) 0:00:35.755 ****** 2025-09-08 01:06:33.356667 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.356678 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.356689 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.356708 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.356719 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.356730 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.356740 | orchestrator | 2025-09-08 01:06:33.356751 | orchestrator | TASK [Load and persist kernel modules] 
***************************************** 2025-09-08 01:06:33.356762 | orchestrator | Monday 08 September 2025 01:02:38 +0000 (0:00:00.878) 0:00:36.633 ****** 2025-09-08 01:06:33.356773 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.356783 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.356794 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.356805 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.356815 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.356826 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.356836 | orchestrator | 2025-09-08 01:06:33.356847 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-08 01:06:33.356858 | orchestrator | Monday 08 September 2025 01:02:40 +0000 (0:00:02.408) 0:00:39.042 ****** 2025-09-08 01:06:33.356869 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:06:33.356880 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:06:33.356890 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:06:33.356901 | orchestrator | ok: [testbed-node-3] 2025-09-08 01:06:33.356912 | orchestrator | ok: [testbed-node-4] 2025-09-08 01:06:33.356923 | orchestrator | ok: [testbed-node-5] 2025-09-08 01:06:33.356933 | orchestrator | 2025-09-08 01:06:33.356944 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-08 01:06:33.356955 | orchestrator | Monday 08 September 2025 01:02:42 +0000 (0:00:01.074) 0:00:40.117 ****** 2025-09-08 01:06:33.356975 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.356986 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.356997 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.357007 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.357018 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.357029 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.357039 
| orchestrator | 2025-09-08 01:06:33.357050 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-08 01:06:33.357061 | orchestrator | Monday 08 September 2025 01:02:44 +0000 (0:00:02.377) 0:00:42.494 ****** 2025-09-08 01:06:33.357075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.357100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.357113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.357131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.357150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.357162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.357173 | orchestrator | 2025-09-08 01:06:33.357185 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-08 01:06:33.357196 | orchestrator | Monday 08 September 2025 01:02:46 +0000 (0:00:02.574) 0:00:45.068 ****** 2025-09-08 01:06:33.357207 | orchestrator | [WARNING]: Skipped 2025-09-08 01:06:33.357218 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-08 01:06:33.357229 | 
orchestrator | due to this access issue: 2025-09-08 01:06:33.357240 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-08 01:06:33.357251 | orchestrator | a directory 2025-09-08 01:06:33.357262 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 01:06:33.357273 | orchestrator | 2025-09-08 01:06:33.357289 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-08 01:06:33.357300 | orchestrator | Monday 08 September 2025 01:02:47 +0000 (0:00:00.811) 0:00:45.880 ****** 2025-09-08 01:06:33.357311 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:06:33.357323 | orchestrator | 2025-09-08 01:06:33.357334 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-08 01:06:33.357345 | orchestrator | Monday 08 September 2025 01:02:49 +0000 (0:00:01.292) 0:00:47.172 ****** 2025-09-08 01:06:33.357361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-09-08 01:06:33.357382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.357394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.357405 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.357423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.357440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.357459 | orchestrator | 2025-09-08 01:06:33.357470 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-08 01:06:33.357481 | orchestrator | Monday 08 September 2025 01:02:52 +0000 (0:00:03.609) 0:00:50.782 ****** 2025-09-08 01:06:33.357493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.357505 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.357516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.357528 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.357545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.357557 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.357568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.357585 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.357602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.357654 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.357667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.357678 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.357689 | orchestrator | 2025-09-08 01:06:33.357700 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-08 01:06:33.357711 | orchestrator | Monday 08 September 2025 01:02:55 +0000 (0:00:02.973) 0:00:53.756 ****** 2025-09-08 01:06:33.357723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.357734 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.357755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.357773 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.357796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.357808 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.357819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.357830 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.357841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.357853 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.357864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.357876 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.357887 | orchestrator | 2025-09-08 01:06:33.357898 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-08 01:06:33.357914 | orchestrator | Monday 08 September 2025 01:02:59 +0000 (0:00:03.457) 0:00:57.213 ****** 2025-09-08 01:06:33.357925 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.357936 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.357953 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.357964 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.357975 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.357986 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.357997 | orchestrator | 2025-09-08 01:06:33.358008 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-08 01:06:33.358067 | orchestrator | Monday 08 September 2025 01:03:01 +0000 (0:00:02.428) 0:00:59.641 ****** 2025-09-08 01:06:33.358081 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.358092 | orchestrator | 2025-09-08 01:06:33.358103 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-08 01:06:33.358114 | orchestrator | Monday 08 September 2025 01:03:01 +0000 (0:00:00.122) 0:00:59.764 ****** 2025-09-08 01:06:33.358125 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.358135 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.358147 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.358157 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.358168 | orchestrator | skipping: [testbed-node-4] 2025-09-08 
01:06:33.358179 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.358190 | orchestrator | 2025-09-08 01:06:33.358201 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-08 01:06:33.358212 | orchestrator | Monday 08 September 2025 01:03:02 +0000 (0:00:00.886) 0:01:00.651 ****** 2025-09-08 01:06:33.358229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.358241 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.358253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.358265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.358283 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.358294 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.358813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.358903 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.358938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.358952 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.358964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.358976 | orchestrator | skipping: [testbed-node-5] 
2025-09-08 01:06:33.358987 | orchestrator | 2025-09-08 01:06:33.359000 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-08 01:06:33.359013 | orchestrator | Monday 08 September 2025 01:03:05 +0000 (0:00:02.791) 0:01:03.442 ****** 2025-09-08 01:06:33.359025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.359078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.359093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.359110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.359122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.359134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.359152 | orchestrator | 2025-09-08 01:06:33.359164 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-08 01:06:33.359175 | orchestrator | Monday 08 September 2025 01:03:09 +0000 (0:00:03.893) 0:01:07.336 ****** 2025-09-08 01:06:33.359193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.359205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.359222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.359233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.359252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.359269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.359281 | orchestrator | 2025-09-08 01:06:33.359292 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-08 01:06:33.359303 | orchestrator | Monday 08 September 2025 01:03:16 +0000 (0:00:07.468) 0:01:14.805 ****** 2025-09-08 01:06:33.359315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.359326 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.359342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.359356 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.359369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.359389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.359403 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.359424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.359443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.359457 | orchestrator |
2025-09-08 01:06:33.359471 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-08 01:06:33.359484 | orchestrator | Monday 08 September 2025 01:03:21 +0000 (0:00:04.872) 0:01:19.678 ******
2025-09-08 01:06:33.359497 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.359510 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.359523 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.359536 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:06:33.359549 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:06:33.359562 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:06:33.359576 | orchestrator |
2025-09-08 01:06:33.359588 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-08 01:06:33.359601 | orchestrator | Monday 08 September 2025 01:03:24 +0000 (0:00:03.203) 0:01:22.882 ******
2025-09-08 01:06:33.359638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.359658 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.359671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.359684 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.359707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.359718 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.359730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.359746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.359766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.359778 | orchestrator |
2025-09-08 01:06:33.359789 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-08 01:06:33.359800 | orchestrator | Monday 08 September 2025 01:03:30 +0000 (0:00:05.933) 0:01:28.815 ******
2025-09-08 01:06:33.359811 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.359822 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.359833 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.359844 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.359854 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.359865 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.359876 | orchestrator |
2025-09-08 01:06:33.359887 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-08 01:06:33.359898 | orchestrator | Monday 08 September 2025 01:03:33 +0000 (0:00:02.648) 0:01:31.464 ******
2025-09-08 01:06:33.359909 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.359920 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.359931 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.359941 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.359952 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.359963 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.359974 | orchestrator |
2025-09-08 01:06:33.359985 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-08 01:06:33.359996 | orchestrator | Monday 08 September 2025 01:03:36 +0000 (0:00:02.668) 0:01:34.133 ******
2025-09-08 01:06:33.360007 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.360018 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.360029 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.360045 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.360056 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.360067 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.360078 | orchestrator |
2025-09-08 01:06:33.360089 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-08 01:06:33.360100 | orchestrator | Monday 08 September 2025 01:03:39 +0000 (0:00:03.128) 0:01:37.262 ******
2025-09-08 01:06:33.360111 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.360122 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.360133 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.360143 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.360154 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.360165 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.360176 | orchestrator |
2025-09-08 01:06:33.360188 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-08 01:06:33.360199 | orchestrator | Monday 08 September 2025 01:03:43 +0000 (0:00:03.934) 0:01:41.196 ******
2025-09-08 01:06:33.360216 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.360227 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.360237 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.360248 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.360259 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.360270 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.360280 | orchestrator |
2025-09-08 01:06:33.360291 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-08 01:06:33.360302 | orchestrator | Monday 08 September 2025 01:03:45 +0000 (0:00:02.560) 0:01:43.757 ******
2025-09-08 01:06:33.360313 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.360324 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.360334 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.360345 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.360356 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.360367 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.360378 | orchestrator |
2025-09-08 01:06:33.360393 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-08 01:06:33.360405 | orchestrator | Monday 08 September 2025 01:03:48 +0000 (0:00:03.058) 0:01:46.815 ******
2025-09-08 01:06:33.360416 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:33.360427 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.360439 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:33.360450 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.360460 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:33.360472 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.360482 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:33.360493 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.360504 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:33.360515 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.360526 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-08 01:06:33.360536 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.360547 | orchestrator |
2025-09-08 01:06:33.360558 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-09-08 01:06:33.360569 | orchestrator | Monday 08 September 2025 01:03:52 +0000 (0:00:03.299) 0:01:50.115 ******
2025-09-08 01:06:33.360581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.360593 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.360637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.360664 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.360675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.360687 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.360703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.360715 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.360727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.360738 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.360750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.360769 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.360780 | orchestrator |
2025-09-08 01:06:33.360791 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-09-08 01:06:33.360801 | orchestrator | Monday 08 September 2025 01:03:55 +0000 (0:00:03.498) 0:01:53.614 ******
2025-09-08 01:06:33.360819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.360831 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.360846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.360858 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.360869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.360881 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.360892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.360911 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.360922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.360934 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.360951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-08 01:06:33.360963 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.360974 | orchestrator |
2025-09-08 01:06:33.360985 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-09-08 01:06:33.360996 | orchestrator | Monday 08 September 2025 01:03:59 +0000 (0:00:04.363) 0:01:57.977 ******
2025-09-08 01:06:33.361007 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361018 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361029 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361039 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.361050 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.361061 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.361072 | orchestrator |
2025-09-08 01:06:33.361083 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-09-08 01:06:33.361098 | orchestrator | Monday 08 September 2025 01:04:04 +0000 (0:00:04.441) 0:02:02.419 ******
2025-09-08 01:06:33.361109 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361120 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361131 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361142 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:06:33.361153 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:06:33.361163 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:06:33.361174 | orchestrator |
2025-09-08 01:06:33.361185 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-09-08 01:06:33.361196 | orchestrator | Monday 08 September 2025 01:04:09 +0000 (0:00:04.863) 0:02:07.282 ******
2025-09-08 01:06:33.361207 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361218 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361229 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361240 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.361250 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.361261 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.361272 | orchestrator |
2025-09-08 01:06:33.361283 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-09-08 01:06:33.361294 | orchestrator | Monday 08 September 2025 01:04:13 +0000 (0:00:04.000) 0:02:11.283 ******
2025-09-08 01:06:33.361305 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361326 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361337 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361348 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.361358 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.361369 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.361380 | orchestrator |
2025-09-08 01:06:33.361391 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-09-08 01:06:33.361402 | orchestrator | Monday 08 September 2025 01:04:16 +0000 (0:00:03.409) 0:02:14.692 ******
2025-09-08 01:06:33.361413 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361424 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361434 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361445 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.361456 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.361467 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.361477 | orchestrator |
2025-09-08 01:06:33.361488 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-09-08 01:06:33.361499 | orchestrator | Monday 08 September 2025 01:04:19 +0000 (0:00:03.286) 0:02:17.978 ******
2025-09-08 01:06:33.361510 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361521 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361532 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.361543 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.361553 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361564 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.361575 | orchestrator |
2025-09-08 01:06:33.361586 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-09-08 01:06:33.361597 | orchestrator | Monday 08 September 2025 01:04:22 +0000 (0:00:02.557) 0:02:20.536 ******
2025-09-08 01:06:33.361669 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361683 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361695 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361706 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.361717 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.361728 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.361739 | orchestrator |
2025-09-08 01:06:33.361749 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-09-08 01:06:33.361759 | orchestrator | Monday 08 September 2025 01:04:25 +0000 (0:00:02.551) 0:02:23.087 ******
2025-09-08 01:06:33.361769 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361779 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361788 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361798 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.361808 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.361817 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.361827 | orchestrator |
2025-09-08 01:06:33.361837 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-09-08 01:06:33.361847 | orchestrator | Monday 08 September 2025 01:04:27 +0000 (0:00:02.587) 0:02:25.675 ******
2025-09-08 01:06:33.361857 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361872 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361882 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361892 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.361902 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.361911 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.361921 | orchestrator |
2025-09-08 01:06:33.361930 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-09-08 01:06:33.361940 | orchestrator | Monday 08 September 2025 01:04:30 +0000 (0:00:03.302) 0:02:28.977 ******
2025-09-08 01:06:33.361950 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.361960 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.361970 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.361986 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.361996 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.362006 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.362057 | orchestrator |
2025-09-08 01:06:33.362070 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-09-08 01:06:33.362081 | orchestrator | Monday 08 September 2025 01:04:33 +0000 (0:00:02.586) 0:02:31.563 ******
2025-09-08 01:06:33.362091 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-08 01:06:33.362101 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.362111 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-08 01:06:33.362121 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.362131 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-08 01:06:33.362141 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:06:33.362156 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-08 01:06:33.362166 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:06:33.362176 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-08 01:06:33.362186 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:06:33.362195 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-08 01:06:33.362205 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:06:33.362215 | orchestrator |
2025-09-08 01:06:33.362225 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-09-08 01:06:33.362235 | orchestrator | Monday 08 September 2025 01:04:36 +0000 (0:00:02.858) 0:02:34.421 ******
2025-09-08 01:06:33.362245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.362256 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:06:33.362267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-08 01:06:33.362277 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:06:33.362295 | orchestrator |
skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-08 01:06:33.362313 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.362328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.362338 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.362348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.362359 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.362368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-08 01:06:33.362379 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.362389 | orchestrator | 2025-09-08 01:06:33.362398 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-08 01:06:33.362408 | orchestrator | Monday 08 September 2025 01:04:38 +0000 (0:00:02.379) 0:02:36.801 ****** 2025-09-08 01:06:33.362418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.362440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.362456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.362467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.362477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-08 01:06:33.362488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-08 01:06:33.362504 | orchestrator | 2025-09-08 01:06:33.362514 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-08 01:06:33.362529 | orchestrator | Monday 08 September 2025 01:04:41 +0000 (0:00:02.984) 0:02:39.786 ****** 2025-09-08 01:06:33.362539 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:06:33.362549 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:06:33.362559 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:06:33.362569 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:06:33.362578 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:06:33.362588 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:06:33.362598 | orchestrator | 2025-09-08 01:06:33.362622 | orchestrator | TASK [neutron : Creating Neutron database] 
************************************* 2025-09-08 01:06:33.362632 | orchestrator | Monday 08 September 2025 01:04:42 +0000 (0:00:00.782) 0:02:40.569 ****** 2025-09-08 01:06:33.362642 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:33.362652 | orchestrator | 2025-09-08 01:06:33.362661 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-08 01:06:33.362671 | orchestrator | Monday 08 September 2025 01:04:44 +0000 (0:00:02.053) 0:02:42.623 ****** 2025-09-08 01:06:33.362681 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:33.362690 | orchestrator | 2025-09-08 01:06:33.362700 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-08 01:06:33.362710 | orchestrator | Monday 08 September 2025 01:04:46 +0000 (0:00:02.346) 0:02:44.969 ****** 2025-09-08 01:06:33.362720 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:33.362729 | orchestrator | 2025-09-08 01:06:33.362739 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:33.362749 | orchestrator | Monday 08 September 2025 01:05:31 +0000 (0:00:44.716) 0:03:29.686 ****** 2025-09-08 01:06:33.362759 | orchestrator | 2025-09-08 01:06:33.362768 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:33.362778 | orchestrator | Monday 08 September 2025 01:05:31 +0000 (0:00:00.076) 0:03:29.763 ****** 2025-09-08 01:06:33.362788 | orchestrator | 2025-09-08 01:06:33.362802 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:33.362812 | orchestrator | Monday 08 September 2025 01:05:31 +0000 (0:00:00.066) 0:03:29.830 ****** 2025-09-08 01:06:33.362822 | orchestrator | 2025-09-08 01:06:33.362832 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:33.362842 | 
orchestrator | Monday 08 September 2025 01:05:31 +0000 (0:00:00.084) 0:03:29.914 ****** 2025-09-08 01:06:33.362851 | orchestrator | 2025-09-08 01:06:33.362861 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:33.362871 | orchestrator | Monday 08 September 2025 01:05:32 +0000 (0:00:00.255) 0:03:30.169 ****** 2025-09-08 01:06:33.362880 | orchestrator | 2025-09-08 01:06:33.362890 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-08 01:06:33.362900 | orchestrator | Monday 08 September 2025 01:05:32 +0000 (0:00:00.102) 0:03:30.272 ****** 2025-09-08 01:06:33.362910 | orchestrator | 2025-09-08 01:06:33.362919 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-08 01:06:33.362929 | orchestrator | Monday 08 September 2025 01:05:32 +0000 (0:00:00.064) 0:03:30.337 ****** 2025-09-08 01:06:33.362939 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:06:33.362949 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:06:33.362968 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:06:33.362978 | orchestrator | 2025-09-08 01:06:33.362988 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-08 01:06:33.362998 | orchestrator | Monday 08 September 2025 01:06:05 +0000 (0:00:32.886) 0:04:03.224 ****** 2025-09-08 01:06:33.363007 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:06:33.363017 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:06:33.363027 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:06:33.363037 | orchestrator | 2025-09-08 01:06:33.363046 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:06:33.363056 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-08 01:06:33.363067 | orchestrator | 
testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-08 01:06:33.363077 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-08 01:06:33.363087 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-09-08 01:06:33.363097 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-09-08 01:06:33.363107 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-09-08 01:06:33.363117 | orchestrator | 2025-09-08 01:06:33.363126 | orchestrator | 2025-09-08 01:06:33.363136 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:06:33.363146 | orchestrator | Monday 08 September 2025 01:06:31 +0000 (0:00:26.637) 0:04:29.862 ****** 2025-09-08 01:06:33.363156 | orchestrator | =============================================================================== 2025-09-08 01:06:33.363166 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.71s 2025-09-08 01:06:33.363176 | orchestrator | neutron : Restart neutron-server container ----------------------------- 32.89s 2025-09-08 01:06:33.363186 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 26.64s 2025-09-08 01:06:33.363196 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.94s 2025-09-08 01:06:33.363210 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.47s 2025-09-08 01:06:33.363220 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.46s 2025-09-08 01:06:33.363230 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.93s 2025-09-08 01:06:33.363240 | orchestrator | neutron 
: Copying over neutron_vpnaas.conf ------------------------------ 4.87s 2025-09-08 01:06:33.363250 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.86s 2025-09-08 01:06:33.363259 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.44s 2025-09-08 01:06:33.363269 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.36s 2025-09-08 01:06:33.363279 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 4.00s 2025-09-08 01:06:33.363289 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 3.93s 2025-09-08 01:06:33.363298 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.89s 2025-09-08 01:06:33.363308 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.69s 2025-09-08 01:06:33.363318 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.61s 2025-09-08 01:06:33.363327 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.61s 2025-09-08 01:06:33.363337 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 3.50s 2025-09-08 01:06:33.363352 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.46s 2025-09-08 01:06:33.363362 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.42s 2025-09-08 01:06:33.363377 | orchestrator | 2025-09-08 01:06:33 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:33.363387 | orchestrator | 2025-09-08 01:06:33 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:36.401940 | orchestrator | 2025-09-08 01:06:36 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:06:36.404028 | orchestrator | 2025-09-08 
01:06:36 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:36.404061 | orchestrator | 2025-09-08 01:06:36 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:36.404073 | orchestrator | 2025-09-08 01:06:36 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:06:36.404335 | orchestrator | 2025-09-08 01:06:36 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:39.446116 | orchestrator | 2025-09-08 01:06:39 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:06:39.448343 | orchestrator | 2025-09-08 01:06:39 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:39.450587 | orchestrator | 2025-09-08 01:06:39 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:39.453447 | orchestrator | 2025-09-08 01:06:39 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:06:39.454942 | orchestrator | 2025-09-08 01:06:39 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:42.495403 | orchestrator | 2025-09-08 01:06:42 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:06:42.497683 | orchestrator | 2025-09-08 01:06:42 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:42.500791 | orchestrator | 2025-09-08 01:06:42 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:42.503256 | orchestrator | 2025-09-08 01:06:42 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:06:42.504045 | orchestrator | 2025-09-08 01:06:42 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:45.553061 | orchestrator | 2025-09-08 01:06:45 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:06:45.554685 | orchestrator | 2025-09-08 01:06:45 | INFO  | Task 
e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:45.556124 | orchestrator | 2025-09-08 01:06:45 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:45.558145 | orchestrator | 2025-09-08 01:06:45 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:06:45.558173 | orchestrator | 2025-09-08 01:06:45 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:48.606005 | orchestrator | 2025-09-08 01:06:48 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:06:48.607707 | orchestrator | 2025-09-08 01:06:48 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:48.609750 | orchestrator | 2025-09-08 01:06:48 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:48.611463 | orchestrator | 2025-09-08 01:06:48 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:06:48.611497 | orchestrator | 2025-09-08 01:06:48 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:51.656902 | orchestrator | 2025-09-08 01:06:51 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:06:51.657826 | orchestrator | 2025-09-08 01:06:51 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:51.659985 | orchestrator | 2025-09-08 01:06:51 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:51.661574 | orchestrator | 2025-09-08 01:06:51 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:06:51.661881 | orchestrator | 2025-09-08 01:06:51 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:54.712365 | orchestrator | 2025-09-08 01:06:54 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:06:54.713896 | orchestrator | 2025-09-08 01:06:54 | INFO  | Task 
e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:54.716304 | orchestrator | 2025-09-08 01:06:54 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:54.719149 | orchestrator | 2025-09-08 01:06:54 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:06:54.719237 | orchestrator | 2025-09-08 01:06:54 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:06:57.765146 | orchestrator | 2025-09-08 01:06:57 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:06:57.767817 | orchestrator | 2025-09-08 01:06:57 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:06:57.770302 | orchestrator | 2025-09-08 01:06:57 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:06:57.773190 | orchestrator | 2025-09-08 01:06:57 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:06:57.773288 | orchestrator | 2025-09-08 01:06:57 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:00.826314 | orchestrator | 2025-09-08 01:07:00 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:07:00.827177 | orchestrator | 2025-09-08 01:07:00 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:07:00.828875 | orchestrator | 2025-09-08 01:07:00 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:07:00.830801 | orchestrator | 2025-09-08 01:07:00 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:07:00.830954 | orchestrator | 2025-09-08 01:07:00 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:03.878616 | orchestrator | 2025-09-08 01:07:03 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:07:03.882745 | orchestrator | 2025-09-08 01:07:03 | INFO  | Task 
e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:07:03.882974 | orchestrator | 2025-09-08 01:07:03 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:07:03.886179 | orchestrator | 2025-09-08 01:07:03 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:07:03.886205 | orchestrator | 2025-09-08 01:07:03 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:06.944724 | orchestrator | 2025-09-08 01:07:06 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:07:06.947831 | orchestrator | 2025-09-08 01:07:06 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state STARTED 2025-09-08 01:07:06.951418 | orchestrator | 2025-09-08 01:07:06 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:07:06.953905 | orchestrator | 2025-09-08 01:07:06 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED 2025-09-08 01:07:06.954112 | orchestrator | 2025-09-08 01:07:06 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:07:10.022804 | orchestrator | 2025-09-08 01:07:10 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED 2025-09-08 01:07:10.025344 | orchestrator | 2025-09-08 01:07:10 | INFO  | Task e81263f9-269d-413c-a46e-7ab349f1266b is in state SUCCESS 2025-09-08 01:07:10.027019 | orchestrator | 2025-09-08 01:07:10.027067 | orchestrator | 2025-09-08 01:07:10.027080 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:07:10.027093 | orchestrator | 2025-09-08 01:07:10.027105 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:07:10.027116 | orchestrator | Monday 08 September 2025 01:06:00 +0000 (0:00:00.343) 0:00:00.343 ****** 2025-09-08 01:07:10.027128 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:07:10.027140 | orchestrator | ok: [testbed-node-1] 
2025-09-08 01:07:10.027151 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:07:10.027162 | orchestrator | 2025-09-08 01:07:10.027173 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:07:10.027184 | orchestrator | Monday 08 September 2025 01:06:00 +0000 (0:00:00.362) 0:00:00.705 ****** 2025-09-08 01:07:10.027195 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-08 01:07:10.027207 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-08 01:07:10.027218 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-08 01:07:10.027229 | orchestrator | 2025-09-08 01:07:10.027240 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-08 01:07:10.027251 | orchestrator | 2025-09-08 01:07:10.027262 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-08 01:07:10.027272 | orchestrator | Monday 08 September 2025 01:06:01 +0000 (0:00:00.474) 0:00:01.180 ****** 2025-09-08 01:07:10.027284 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:07:10.027296 | orchestrator | 2025-09-08 01:07:10.027306 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-08 01:07:10.027317 | orchestrator | Monday 08 September 2025 01:06:01 +0000 (0:00:00.630) 0:00:01.811 ****** 2025-09-08 01:07:10.027328 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-08 01:07:10.027339 | orchestrator | 2025-09-08 01:07:10.027350 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-08 01:07:10.027379 | orchestrator | Monday 08 September 2025 01:06:05 +0000 (0:00:03.506) 0:00:05.318 ****** 2025-09-08 01:07:10.027391 | orchestrator | changed: [testbed-node-0] => 
(item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-08 01:07:10.027402 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-08 01:07:10.027413 | orchestrator | 2025-09-08 01:07:10.027424 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-08 01:07:10.027434 | orchestrator | Monday 08 September 2025 01:06:12 +0000 (0:00:07.184) 0:00:12.502 ****** 2025-09-08 01:07:10.027445 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-08 01:07:10.027456 | orchestrator | 2025-09-08 01:07:10.027467 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-08 01:07:10.027478 | orchestrator | Monday 08 September 2025 01:06:15 +0000 (0:00:03.348) 0:00:15.851 ****** 2025-09-08 01:07:10.027488 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-08 01:07:10.027499 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-08 01:07:10.027510 | orchestrator | 2025-09-08 01:07:10.027521 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-08 01:07:10.027532 | orchestrator | Monday 08 September 2025 01:06:19 +0000 (0:00:03.567) 0:00:19.419 ****** 2025-09-08 01:07:10.027570 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-08 01:07:10.027582 | orchestrator | 2025-09-08 01:07:10.027595 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-08 01:07:10.027609 | orchestrator | Monday 08 September 2025 01:06:22 +0000 (0:00:03.403) 0:00:22.823 ****** 2025-09-08 01:07:10.027621 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-08 01:07:10.027634 | orchestrator | 2025-09-08 01:07:10.027647 | orchestrator | TASK [placement : include_tasks] *********************************************** 
2025-09-08 01:07:10.027661 | orchestrator | Monday 08 September 2025 01:06:27 +0000 (0:00:04.299) 0:00:27.122 ****** 2025-09-08 01:07:10.027697 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:10.027711 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:10.027724 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:10.027737 | orchestrator | 2025-09-08 01:07:10.027750 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-08 01:07:10.027763 | orchestrator | Monday 08 September 2025 01:06:27 +0000 (0:00:00.275) 0:00:27.398 ****** 2025-09-08 01:07:10.027781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.027815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.027836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.027850 | orchestrator | 2025-09-08 01:07:10.027863 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-08 01:07:10.027884 | orchestrator | Monday 08 September 2025 01:06:28 +0000 (0:00:00.979) 0:00:28.378 ****** 2025-09-08 01:07:10.027898 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:10.027910 | orchestrator | 2025-09-08 01:07:10.027923 | orchestrator | TASK [placement : Set placement policy file] 
*********************************** 2025-09-08 01:07:10.027937 | orchestrator | Monday 08 September 2025 01:06:28 +0000 (0:00:00.131) 0:00:28.509 ****** 2025-09-08 01:07:10.027948 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:10.027959 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:10.027970 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:10.027980 | orchestrator | 2025-09-08 01:07:10.027991 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-08 01:07:10.028002 | orchestrator | Monday 08 September 2025 01:06:29 +0000 (0:00:00.533) 0:00:29.043 ****** 2025-09-08 01:07:10.028013 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:07:10.028024 | orchestrator | 2025-09-08 01:07:10.028034 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-08 01:07:10.028045 | orchestrator | Monday 08 September 2025 01:06:29 +0000 (0:00:00.598) 0:00:29.642 ****** 2025-09-08 01:07:10.028056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028108 | orchestrator | 
2025-09-08 01:07:10.028120 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-08 01:07:10.028135 | orchestrator | Monday 08 September 2025 01:06:31 +0000 (0:00:01.847) 0:00:31.489 ****** 2025-09-08 01:07:10.028147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:10.028159 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:10.028171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:10.028183 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:10.028202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:10.028214 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:10.028225 | orchestrator | 2025-09-08 01:07:10.028236 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-08 01:07:10.028247 | orchestrator | Monday 08 September 2025 01:06:32 +0000 (0:00:01.071) 0:00:32.561 ****** 2025-09-08 01:07:10.028259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:10.028277 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:10.028294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:10.028306 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:10.028317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:10.028328 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:10.028339 | orchestrator | 2025-09-08 01:07:10.028350 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-08 01:07:10.028361 | orchestrator | Monday 08 September 2025 01:06:33 +0000 (0:00:00.810) 0:00:33.372 ****** 2025-09-08 01:07:10.028380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 
01:07:10.028393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028436 | orchestrator | 2025-09-08 01:07:10.028447 | orchestrator | TASK [placement : 
Copying over placement.conf] ********************************* 2025-09-08 01:07:10.028458 | orchestrator | Monday 08 September 2025 01:06:35 +0000 (0:00:01.613) 0:00:34.985 ****** 2025-09-08 01:07:10.028469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028644 | orchestrator | 2025-09-08 01:07:10.028655 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-08 01:07:10.028687 | orchestrator | Monday 08 September 2025 01:06:37 +0000 (0:00:02.631) 0:00:37.617 ****** 2025-09-08 01:07:10.028699 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-08 01:07:10.028710 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-08 01:07:10.028721 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-08 01:07:10.028732 | orchestrator | 2025-09-08 01:07:10.028749 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-08 01:07:10.028760 | orchestrator | Monday 08 September 2025 01:06:39 
+0000 (0:00:01.628) 0:00:39.245 ****** 2025-09-08 01:07:10.028771 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:10.028782 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:07:10.028793 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:07:10.028804 | orchestrator | 2025-09-08 01:07:10.028814 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-08 01:07:10.028826 | orchestrator | Monday 08 September 2025 01:06:40 +0000 (0:00:01.485) 0:00:40.731 ****** 2025-09-08 01:07:10.028837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:10.028849 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:07:10.028860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:10.028871 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:07:10.028891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-08 01:07:10.028910 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:07:10.028921 | orchestrator | 2025-09-08 01:07:10.028932 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-08 01:07:10.028942 | orchestrator | Monday 08 September 2025 01:06:41 +0000 (0:00:00.478) 0:00:41.209 ****** 2025-09-08 01:07:10.028959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-08 01:07:10.028994 | orchestrator | 2025-09-08 01:07:10.029005 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-08 01:07:10.029023 | orchestrator | Monday 08 September 2025 01:06:42 +0000 (0:00:01.599) 0:00:42.809 ****** 2025-09-08 01:07:10.029034 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:10.029044 | orchestrator | 2025-09-08 01:07:10.029055 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-08 01:07:10.029066 | orchestrator | Monday 08 September 2025 01:06:45 +0000 (0:00:02.258) 0:00:45.068 ****** 2025-09-08 01:07:10.029077 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:10.029088 | orchestrator | 2025-09-08 01:07:10.029098 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-08 01:07:10.029109 | orchestrator | Monday 08 September 2025 01:06:47 +0000 (0:00:02.545) 0:00:47.614 ****** 2025-09-08 01:07:10.029126 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:10.029138 | orchestrator | 2025-09-08 01:07:10.029148 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2025-09-08 01:07:10.029159 | orchestrator | Monday 08 September 2025 01:07:00 +0000 (0:00:12.916) 0:01:00.530 ****** 2025-09-08 01:07:10.029170 | orchestrator | 2025-09-08 01:07:10.029181 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-08 01:07:10.029191 | orchestrator | Monday 08 September 2025 01:07:00 +0000 (0:00:00.063) 0:01:00.594 ****** 2025-09-08 01:07:10.029202 | orchestrator | 2025-09-08 01:07:10.029213 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-08 01:07:10.029227 | orchestrator | Monday 08 September 2025 01:07:00 +0000 (0:00:00.066) 0:01:00.661 ****** 2025-09-08 01:07:10.029240 | orchestrator | 2025-09-08 01:07:10.029253 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-08 01:07:10.029267 | orchestrator | Monday 08 September 2025 01:07:00 +0000 (0:00:00.065) 0:01:00.726 ****** 2025-09-08 01:07:10.029280 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:07:10.029293 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:07:10.029306 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:07:10.029319 | orchestrator | 2025-09-08 01:07:10.029332 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:07:10.029348 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-08 01:07:10.029363 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:07:10.029377 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:07:10.029390 | orchestrator | 2025-09-08 01:07:10.029404 | orchestrator | 2025-09-08 01:07:10.029417 | orchestrator | TASKS RECAP 
********************************************************************
2025-09-08 01:07:10.029436 | orchestrator | Monday 08 September 2025 01:07:06 +0000 (0:00:06.214) 0:01:06.941 ******
2025-09-08 01:07:10.029449 | orchestrator | ===============================================================================
2025-09-08 01:07:10.029462 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.92s
2025-09-08 01:07:10.029475 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.18s
2025-09-08 01:07:10.029489 | orchestrator | placement : Restart placement-api container ----------------------------- 6.21s
2025-09-08 01:07:10.029502 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.30s
2025-09-08 01:07:10.029516 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.57s
2025-09-08 01:07:10.029528 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.51s
2025-09-08 01:07:10.029541 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.40s
2025-09-08 01:07:10.029556 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.35s
2025-09-08 01:07:10.029569 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.63s
2025-09-08 01:07:10.029587 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.55s
2025-09-08 01:07:10.029598 | orchestrator | placement : Creating placement databases -------------------------------- 2.26s
2025-09-08 01:07:10.029609 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.85s
2025-09-08 01:07:10.029619 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.63s
2025-09-08 01:07:10.029630 | orchestrator | placement : Copying over config.json files for services ----------------- 1.61s
2025-09-08 01:07:10.029641 | orchestrator | placement : Check placement containers ---------------------------------- 1.60s
2025-09-08 01:07:10.029652 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.49s
2025-09-08 01:07:10.029662 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.07s
2025-09-08 01:07:10.029741 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.98s
2025-09-08 01:07:10.029752 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.81s
2025-09-08 01:07:10.029762 | orchestrator | placement : include_tasks ----------------------------------------------- 0.63s
2025-09-08 01:07:10.029772 | orchestrator | 2025-09-08 01:07:10 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:10.030882 | orchestrator | 2025-09-08 01:07:10 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:10.032574 | orchestrator | 2025-09-08 01:07:10 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:10.032593 | orchestrator | 2025-09-08 01:07:10 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:13.084920 | orchestrator | 2025-09-08 01:07:13 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:13.085046 | orchestrator | 2025-09-08 01:07:13 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:13.087430 | orchestrator | 2025-09-08 01:07:13 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:13.089252 | orchestrator | 2025-09-08 01:07:13 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:13.089292 | orchestrator | 2025-09-08 01:07:13 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:16.137910 | orchestrator | 2025-09-08 01:07:16 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:16.139204 | orchestrator | 2025-09-08 01:07:16 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:16.144223 | orchestrator | 2025-09-08 01:07:16 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:16.144248 | orchestrator | 2025-09-08 01:07:16 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:16.144260 | orchestrator | 2025-09-08 01:07:16 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:19.181564 | orchestrator | 2025-09-08 01:07:19 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:19.182930 | orchestrator | 2025-09-08 01:07:19 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:19.184494 | orchestrator | 2025-09-08 01:07:19 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:19.185270 | orchestrator | 2025-09-08 01:07:19 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:19.185296 | orchestrator | 2025-09-08 01:07:19 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:22.228327 | orchestrator | 2025-09-08 01:07:22 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:22.228657 | orchestrator | 2025-09-08 01:07:22 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:22.229651 | orchestrator | 2025-09-08 01:07:22 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:22.230565 | orchestrator | 2025-09-08 01:07:22 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:22.230593 | orchestrator | 2025-09-08 01:07:22 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:25.269043 | orchestrator | 2025-09-08 01:07:25 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:25.269871 | orchestrator | 2025-09-08 01:07:25 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:25.270735 | orchestrator | 2025-09-08 01:07:25 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:25.271828 | orchestrator | 2025-09-08 01:07:25 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:25.271852 | orchestrator | 2025-09-08 01:07:25 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:28.308291 | orchestrator | 2025-09-08 01:07:28 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:28.308557 | orchestrator | 2025-09-08 01:07:28 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:28.309461 | orchestrator | 2025-09-08 01:07:28 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:28.310464 | orchestrator | 2025-09-08 01:07:28 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:28.310490 | orchestrator | 2025-09-08 01:07:28 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:31.390216 | orchestrator | 2025-09-08 01:07:31 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:31.391071 | orchestrator | 2025-09-08 01:07:31 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:31.391522 | orchestrator | 2025-09-08 01:07:31 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:31.393392 | orchestrator | 2025-09-08 01:07:31 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:31.393413 | orchestrator | 2025-09-08 01:07:31 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:34.432577 | orchestrator | 2025-09-08 01:07:34 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:34.434185 | orchestrator | 2025-09-08 01:07:34 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:34.435587 | orchestrator | 2025-09-08 01:07:34 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:34.437297 | orchestrator | 2025-09-08 01:07:34 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:34.437496 | orchestrator | 2025-09-08 01:07:34 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:37.477264 | orchestrator | 2025-09-08 01:07:37 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:37.480165 | orchestrator | 2025-09-08 01:07:37 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:37.482301 | orchestrator | 2025-09-08 01:07:37 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:37.483991 | orchestrator | 2025-09-08 01:07:37 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:37.484302 | orchestrator | 2025-09-08 01:07:37 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:40.519257 | orchestrator | 2025-09-08 01:07:40 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:40.519365 | orchestrator | 2025-09-08 01:07:40 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:40.520079 | orchestrator | 2025-09-08 01:07:40 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:40.521080 | orchestrator | 2025-09-08 01:07:40 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:40.521102 | orchestrator | 2025-09-08 01:07:40 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:43.560198 | orchestrator | 2025-09-08 01:07:43 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:43.560993 | orchestrator | 2025-09-08 01:07:43 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:43.562282 | orchestrator | 2025-09-08 01:07:43 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:43.563947 | orchestrator | 2025-09-08 01:07:43 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:43.564133 | orchestrator | 2025-09-08 01:07:43 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:46.598985 | orchestrator | 2025-09-08 01:07:46 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:46.599548 | orchestrator | 2025-09-08 01:07:46 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:46.600151 | orchestrator | 2025-09-08 01:07:46 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:46.601068 | orchestrator | 2025-09-08 01:07:46 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:46.601094 | orchestrator | 2025-09-08 01:07:46 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:49.638998 | orchestrator | 2025-09-08 01:07:49 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:49.641145 | orchestrator | 2025-09-08 01:07:49 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:49.643262 | orchestrator | 2025-09-08 01:07:49 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:49.644103 | orchestrator | 2025-09-08 01:07:49 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:49.644132 | orchestrator | 2025-09-08 01:07:49 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:52.675180 | orchestrator | 2025-09-08 01:07:52 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:52.675538 | orchestrator | 2025-09-08 01:07:52 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:52.676276 | orchestrator | 2025-09-08 01:07:52 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:52.677051 | orchestrator | 2025-09-08 01:07:52 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:52.677162 | orchestrator | 2025-09-08 01:07:52 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:55.702586 | orchestrator | 2025-09-08 01:07:55 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:55.705469 | orchestrator | 2025-09-08 01:07:55 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:55.707941 | orchestrator | 2025-09-08 01:07:55 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:55.710108 | orchestrator | 2025-09-08 01:07:55 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:55.710482 | orchestrator | 2025-09-08 01:07:55 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:07:58.741942 | orchestrator | 2025-09-08 01:07:58 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:07:58.743674 | orchestrator | 2025-09-08 01:07:58 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:07:58.745214 | orchestrator | 2025-09-08 01:07:58 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:07:58.746945 | orchestrator | 2025-09-08 01:07:58 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:07:58.747200 | orchestrator | 2025-09-08 01:07:58 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:01.797322 | orchestrator | 2025-09-08 01:08:01 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:01.797653 | orchestrator | 2025-09-08 01:08:01 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:01.798567 | orchestrator | 2025-09-08 01:08:01 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:01.799448 | orchestrator | 2025-09-08 01:08:01 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:01.799468 | orchestrator | 2025-09-08 01:08:01 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:04.838385 | orchestrator | 2025-09-08 01:08:04 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:04.841270 | orchestrator | 2025-09-08 01:08:04 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:04.843568 | orchestrator | 2025-09-08 01:08:04 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:04.846639 | orchestrator | 2025-09-08 01:08:04 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:04.846664 | orchestrator | 2025-09-08 01:08:04 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:07.890194 | orchestrator | 2025-09-08 01:08:07 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:07.891996 | orchestrator | 2025-09-08 01:08:07 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:07.893580 | orchestrator | 2025-09-08 01:08:07 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:07.894804 | orchestrator | 2025-09-08 01:08:07 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:07.895001 | orchestrator | 2025-09-08 01:08:07 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:10.938669 | orchestrator | 2025-09-08 01:08:10 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:10.940321 | orchestrator | 2025-09-08 01:08:10 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:10.941223 | orchestrator | 2025-09-08 01:08:10 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:10.942480 | orchestrator | 2025-09-08 01:08:10 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:10.942507 | orchestrator | 2025-09-08 01:08:10 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:13.991850 | orchestrator | 2025-09-08 01:08:13 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:13.993358 | orchestrator | 2025-09-08 01:08:13 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:13.994975 | orchestrator | 2025-09-08 01:08:13 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:13.996738 | orchestrator | 2025-09-08 01:08:13 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:13.996872 | orchestrator | 2025-09-08 01:08:13 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:17.040519 | orchestrator | 2025-09-08 01:08:17 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:17.041301 | orchestrator | 2025-09-08 01:08:17 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:17.042119 | orchestrator | 2025-09-08 01:08:17 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:17.043462 | orchestrator | 2025-09-08 01:08:17 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:17.043491 | orchestrator | 2025-09-08 01:08:17 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:20.099505 | orchestrator | 2025-09-08 01:08:20 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:20.102135 | orchestrator | 2025-09-08 01:08:20 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:20.104626 | orchestrator | 2025-09-08 01:08:20 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:20.106592 | orchestrator | 2025-09-08 01:08:20 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:20.106617 | orchestrator | 2025-09-08 01:08:20 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:23.148823 | orchestrator | 2025-09-08 01:08:23 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:23.150227 | orchestrator | 2025-09-08 01:08:23 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:23.151968 | orchestrator | 2025-09-08 01:08:23 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:23.155915 | orchestrator | 2025-09-08 01:08:23 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:23.155965 | orchestrator | 2025-09-08 01:08:23 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:26.200464 | orchestrator | 2025-09-08 01:08:26 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:26.203709 | orchestrator | 2025-09-08 01:08:26 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:26.206445 | orchestrator | 2025-09-08 01:08:26 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:26.208834 | orchestrator | 2025-09-08 01:08:26 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:26.208922 | orchestrator | 2025-09-08 01:08:26 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:29.250590 | orchestrator | 2025-09-08 01:08:29 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:29.251625 | orchestrator | 2025-09-08 01:08:29 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:29.253907 | orchestrator | 2025-09-08 01:08:29 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:29.255046 | orchestrator | 2025-09-08 01:08:29 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:29.255073 | orchestrator | 2025-09-08 01:08:29 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:32.291935 | orchestrator | 2025-09-08 01:08:32 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:32.292519 | orchestrator | 2025-09-08 01:08:32 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:32.293529 | orchestrator | 2025-09-08 01:08:32 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:32.294565 | orchestrator | 2025-09-08 01:08:32 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state STARTED
2025-09-08 01:08:32.294587 | orchestrator | 2025-09-08 01:08:32 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:35.339374 | orchestrator | 2025-09-08 01:08:35 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state STARTED
2025-09-08 01:08:35.340957 | orchestrator | 2025-09-08 01:08:35 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:08:35.344084 | orchestrator | 2025-09-08 01:08:35 | INFO  | Task 429ea3c3-ef06-40ac-b740-be73f57d280a is in state STARTED
2025-09-08 01:08:35.345661 | orchestrator | 2025-09-08 01:08:35 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED
2025-09-08 01:08:35.348248 | orchestrator |
2025-09-08 01:08:35.348276 | orchestrator | 2025-09-08 01:08:35 | INFO  | Task 303971e8-fa9e-4e5f-94ec-f869875363d6 is in state SUCCESS
2025-09-08 01:08:35.350345 | orchestrator |
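The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a watcher that polls a set of task IDs until each one leaves the STARTED state. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable (the real OSISM client API is not modelled here):

```python
import time

def wait_for_tasks(get_state, task_ids, poll_interval=1.0, sleep=time.sleep):
    """Poll every task ID until none reports STARTED, mirroring the
    log's check/wait loop.

    get_state is a hypothetical callable (task_id -> state string).
    Returns a dict mapping task_id to its final state.
    """
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                final[task_id] = state  # e.g. SUCCESS or FAILURE
        pending -= set(final)
        if pending:
            print(f"Wait {int(poll_interval)} second(s) until the next check")
            sleep(poll_interval)
    return final
```

Injecting `sleep` as a parameter keeps the loop testable without incurring real one-second delays.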
2025-09-08 01:08:35.350378 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:08:35.350390 | orchestrator |
2025-09-08 01:08:35.350402 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:08:35.350413 | orchestrator | Monday 08 September 2025 01:06:36 +0000 (0:00:00.263) 0:00:00.263 ******
2025-09-08 01:08:35.350425 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:35.350437 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:08:35.350449 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:08:35.350460 | orchestrator |
2025-09-08 01:08:35.350471 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:08:35.350482 | orchestrator | Monday 08 September 2025 01:06:37 +0000 (0:00:00.308) 0:00:00.572 ******
2025-09-08 01:08:35.350493 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-08 01:08:35.350505 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-08 01:08:35.350515 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-08 01:08:35.350527 | orchestrator |
2025-09-08 01:08:35.350538 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-08 01:08:35.350549 | orchestrator |
2025-09-08 01:08:35.350560 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-08 01:08:35.350571 | orchestrator | Monday 08 September 2025 01:06:37 +0000 (0:00:00.442) 0:00:01.015 ******
2025-09-08 01:08:35.350582 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:08:35.350594 | orchestrator |
2025-09-08 01:08:35.350604 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-08 01:08:35.350615 | orchestrator | Monday 08 September 2025 01:06:38 +0000 (0:00:00.616) 0:00:01.631 ******
2025-09-08 01:08:35.350627 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-08 01:08:35.350638 | orchestrator |
2025-09-08 01:08:35.350649 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-08 01:08:35.350660 | orchestrator | Monday 08 September 2025 01:06:41 +0000 (0:00:03.647) 0:00:05.279 ******
2025-09-08 01:08:35.350670 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-08 01:08:35.350718 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-08 01:08:35.350730 | orchestrator |
2025-09-08 01:08:35.350823 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-08 01:08:35.350838 | orchestrator | Monday 08 September 2025 01:06:48 +0000 (0:00:06.998) 0:00:12.277 ******
2025-09-08 01:08:35.350850 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-08 01:08:35.350861 | orchestrator |
2025-09-08 01:08:35.350871 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-08 01:08:35.350882 | orchestrator | Monday 08 September 2025 01:06:51 +0000 (0:00:03.180) 0:00:15.458 ******
2025-09-08 01:08:35.350893 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:08:35.350948 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-08 01:08:35.350964 | orchestrator |
2025-09-08 01:08:35.350977 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-08 01:08:35.350989 | orchestrator | Monday 08 September 2025 01:06:55 +0000 (0:00:03.980) 0:00:19.438 ******
2025-09-08 01:08:35.351002 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:08:35.351015 | orchestrator |
2025-09-08 01:08:35.351029 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-08 01:08:35.351042 | orchestrator | Monday 08 September 2025 01:06:59 +0000 (0:00:03.434) 0:00:22.873 ******
2025-09-08 01:08:35.351055 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-09-08 01:08:35.351067 | orchestrator |
2025-09-08 01:08:35.351080 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-09-08 01:08:35.351093 | orchestrator | Monday 08 September 2025 01:07:03 +0000 (0:00:04.130) 0:00:27.004 ******
2025-09-08 01:08:35.351108 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:35.351121 | orchestrator |
2025-09-08 01:08:35.351134 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-09-08 01:08:35.351148 | orchestrator | Monday 08 September 2025 01:07:07 +0000 (0:00:03.771) 0:00:30.775 ******
2025-09-08 01:08:35.351162 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:35.351175 | orchestrator |
2025-09-08 01:08:35.351188 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-09-08 01:08:35.351200 | orchestrator | Monday 08 September 2025 01:07:11 +0000 (0:00:04.241) 0:00:35.016 ******
2025-09-08 01:08:35.351214 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:35.351227 | orchestrator |
2025-09-08 01:08:35.351239 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-09-08 01:08:35.351252 | orchestrator | Monday 08 September 2025 01:07:15 +0000 (0:00:04.198) 0:00:39.215 ******
2025-09-08 01:08:35.351284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.351301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.351322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.351340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.351354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.351373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.351385 | orchestrator |
2025-09-08 01:08:35.351396 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-09-08 01:08:35.351407 | orchestrator | Monday 08 September 2025 01:07:17 +0000 (0:00:01.497) 0:00:40.713 ******
2025-09-08 01:08:35.351430 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:35.351441 | orchestrator |
2025-09-08 01:08:35.351452 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-09-08 01:08:35.351469 | orchestrator | Monday 08 September 2025 01:07:17 +0000 (0:00:00.134) 0:00:40.847 ******
2025-09-08 01:08:35.351480 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:35.351492 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:35.351502 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:35.351513 | orchestrator |
2025-09-08 01:08:35.351524 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-09-08 01:08:35.351535 | orchestrator | Monday 08 September 2025 01:07:17 +0000 (0:00:00.524) 0:00:41.372 ******
2025-09-08 01:08:35.351546 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 01:08:35.351557 | orchestrator |
2025-09-08 01:08:35.351568 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-09-08 01:08:35.351579 | orchestrator | Monday 08 September 2025 01:07:18 +0000 (0:00:00.939) 0:00:42.312 ******
2025-09-08 01:08:35.351591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.351641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.351655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.351675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.351694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.351706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.351717 | orchestrator |
2025-09-08 01:08:35.351728 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-09-08 01:08:35.351739 | orchestrator | Monday 08 September 2025 01:07:21 +0000 (0:00:02.518) 0:00:44.830 ******
2025-09-08 01:08:35.351756 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:35.351768 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:08:35.351778 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:08:35.351789 | orchestrator | 2025-09-08 01:08:35.351832 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-08 01:08:35.351843 | orchestrator | Monday 08 September 2025 01:07:21 +0000 (0:00:00.398) 0:00:45.228 ****** 2025-09-08 01:08:35.351854 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:08:35.351865 | orchestrator | 2025-09-08 01:08:35.351876 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-08 01:08:35.351886 | orchestrator | Monday 08 September 2025 01:07:22 +0000 (0:00:00.807) 0:00:46.036 ****** 2025-09-08 01:08:35.351898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:35.351918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:35.351937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:35.351949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:35.351965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:35.351978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:35.351995 | orchestrator | 2025-09-08 01:08:35.352006 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-08 01:08:35.352017 | orchestrator | Monday 08 September 2025 01:07:26 +0000 (0:00:03.806) 0:00:49.842 ****** 2025-09-08 01:08:35.352035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:35.352047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:35.352058 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:35.352083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:35.352096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:35.352107 | orchestrator | skipping: [testbed-node-1] 2025-09-08 
01:08:35.352119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:35.352147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:35.352159 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:35.352171 | orchestrator | 2025-09-08 01:08:35.352182 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-08 01:08:35.352193 | 
orchestrator | Monday 08 September 2025 01:07:27 +0000 (0:00:01.448) 0:00:51.291 ****** 2025-09-08 01:08:35.352204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:35.352220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:35.352232 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:35.352244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:35.352269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:35.352282 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:35.352293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-08 01:08:35.352305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:35.352316 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:35.352327 | orchestrator | 2025-09-08 01:08:35.352338 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-08 01:08:35.352349 | orchestrator | Monday 08 September 2025 01:07:29 +0000 (0:00:01.417) 0:00:52.709 ****** 2025-09-08 01:08:35.352365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:35.352384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:35.352664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:35.352684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:35.352703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:35.352715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:35.352736 | orchestrator | 2025-09-08 01:08:35.352747 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-08 01:08:35.352758 | orchestrator | Monday 08 September 2025 01:07:31 +0000 (0:00:02.409) 0:00:55.119 ****** 2025-09-08 01:08:35.352769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:35.352788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:35.352820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-08 01:08:35.352837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:35.352855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:35.352867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.352878 | orchestrator |
2025-09-08 01:08:35.352890 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2025-09-08 01:08:35.352906 | orchestrator | Monday 08 September 2025 01:07:36 +0000 (0:00:05.173) 0:01:00.292 ******
2025-09-08 01:08:35.352918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.352930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.352941 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:35.352957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.352977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.352988 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:35.353006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.353018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.353030 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:35.353041 | orchestrator |
2025-09-08 01:08:35.353052 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2025-09-08 01:08:35.353063 | orchestrator | Monday 08 September 2025 01:07:37 +0000 (0:00:00.854) 0:01:01.147 ******
2025-09-08 01:08:35.353080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.353101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.353112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-08 01:08:35.353154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.353167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.353184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:35.353209 | orchestrator |
2025-09-08 01:08:35.353221 | orchestrator | TASK [magnum : include_tasks]
**************************************************
2025-09-08 01:08:35.353232 | orchestrator | Monday 08 September 2025 01:07:39 +0000 (0:00:02.156) 0:01:03.303 ******
2025-09-08 01:08:35.353243 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:35.353254 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:35.353267 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:35.353280 | orchestrator |
2025-09-08 01:08:35.353293 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-09-08 01:08:35.353306 | orchestrator | Monday 08 September 2025 01:07:40 +0000 (0:00:00.305) 0:01:03.608 ******
2025-09-08 01:08:35.353319 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:35.353332 | orchestrator |
2025-09-08 01:08:35.353345 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-08 01:08:35.353358 | orchestrator | Monday 08 September 2025 01:07:42 +0000 (0:00:02.213) 0:01:05.822 ******
2025-09-08 01:08:35.353371 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:35.353384 | orchestrator |
2025-09-08 01:08:35.353396 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-08 01:08:35.353410 | orchestrator | Monday 08 September 2025 01:07:44 +0000 (0:00:02.350) 0:01:08.173 ******
2025-09-08 01:08:35.353422 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:35.353435 | orchestrator |
2025-09-08 01:08:35.353449 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-08 01:08:35.353462 | orchestrator | Monday 08 September 2025 01:08:01 +0000 (0:00:16.623) 0:01:24.796 ******
2025-09-08 01:08:35.353475 | orchestrator |
2025-09-08 01:08:35.353488 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-08 01:08:35.353501 | orchestrator | Monday 08 September 2025 01:08:01 +0000 (0:00:00.076) 0:01:24.873 ******
2025-09-08 01:08:35.353513 | orchestrator |
2025-09-08 01:08:35.353526 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-08 01:08:35.353540 | orchestrator | Monday 08 September 2025 01:08:01 +0000 (0:00:00.065) 0:01:24.939 ******
2025-09-08 01:08:35.353552 | orchestrator |
2025-09-08 01:08:35.353565 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-08 01:08:35.353577 | orchestrator | Monday 08 September 2025 01:08:01 +0000 (0:00:00.071) 0:01:25.010 ******
2025-09-08 01:08:35.353590 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:35.353603 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:08:35.353617 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:08:35.353629 | orchestrator |
2025-09-08 01:08:35.353640 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-08 01:08:35.353651 | orchestrator | Monday 08 September 2025 01:08:17 +0000 (0:00:15.751) 0:01:40.762 ******
2025-09-08 01:08:35.353662 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:35.353673 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:08:35.353683 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:08:35.353694 | orchestrator |
2025-09-08 01:08:35.353710 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 01:08:35.353723 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-08 01:08:35.353735 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-08 01:08:35.353746 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-08 01:08:35.353764 | orchestrator |
2025-09-08 01:08:35.353775 | orchestrator |
2025-09-08 01:08:35.353786 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 01:08:35.354114 | orchestrator | Monday 08 September 2025 01:08:32 +0000 (0:00:15.649) 0:01:56.411 ******
2025-09-08 01:08:35.354237 | orchestrator | ===============================================================================
2025-09-08 01:08:35.354252 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.62s
2025-09-08 01:08:35.354263 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.75s
2025-09-08 01:08:35.354274 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.65s
2025-09-08 01:08:35.354286 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.00s
2025-09-08 01:08:35.354297 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.17s
2025-09-08 01:08:35.354308 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.24s
2025-09-08 01:08:35.354319 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.20s
2025-09-08 01:08:35.354330 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.13s
2025-09-08 01:08:35.354340 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.98s
2025-09-08 01:08:35.354351 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.81s
2025-09-08 01:08:35.354362 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.77s
2025-09-08 01:08:35.354372 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.65s
2025-09-08 01:08:35.354383 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.43s
2025-09-08 01:08:35.354394 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.18s
2025-09-08 01:08:35.354405 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.52s
2025-09-08 01:08:35.354416 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.41s
2025-09-08 01:08:35.354453 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.35s
2025-09-08 01:08:35.354465 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.21s
2025-09-08 01:08:35.354476 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.16s
2025-09-08 01:08:35.354487 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.50s
2025-09-08 01:08:35.354498 | orchestrator | 2025-09-08 01:08:35 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:08:38.395599 | orchestrator | 2025-09-08 01:08:38 | INFO  | Task f3f1d995-30d1-49d3-9200-118b434d5c71 is in state SUCCESS
2025-09-08 01:08:38.399160 | orchestrator |
2025-09-08 01:08:38.399204 | orchestrator |
2025-09-08 01:08:38.399218 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:08:38.399231 | orchestrator |
2025-09-08 01:08:38.399243 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-08 01:08:38.399255 | orchestrator | Monday 08 September 2025 00:59:34 +0000 (0:00:00.287) 0:00:00.287 ******
2025-09-08 01:08:38.399267 | orchestrator | changed: [testbed-manager]
2025-09-08 01:08:38.399280 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.399291 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:08:38.399302 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:08:38.399313 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.399324 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.399335 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.399346 | orchestrator |
2025-09-08 01:08:38.399357 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:08:38.399367 | orchestrator | Monday 08 September 2025 00:59:35 +0000 (0:00:01.366) 0:00:01.654 ******
2025-09-08 01:08:38.399410 | orchestrator | changed: [testbed-manager]
2025-09-08 01:08:38.399422 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.399433 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:08:38.399444 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:08:38.399455 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.399467 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.399478 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.399489 | orchestrator |
2025-09-08 01:08:38.399544 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:08:38.399583 | orchestrator | Monday 08 September 2025 00:59:36 +0000 (0:00:00.681) 0:00:02.335 ******
2025-09-08 01:08:38.399595 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-09-08 01:08:38.399622 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-09-08 01:08:38.399633 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-09-08 01:08:38.399644 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-09-08 01:08:38.399668 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-09-08 01:08:38.399679 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-09-08 01:08:38.399690 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-09-08 01:08:38.399772 | orchestrator |
2025-09-08 01:08:38.399786 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-09-08 01:08:38.399840 | orchestrator |
2025-09-08 01:08:38.399854 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-08 01:08:38.399867 | orchestrator | Monday 08 September 2025 00:59:37 +0000 (0:00:00.864) 0:00:03.200 ******
2025-09-08 01:08:38.399880 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:08:38.399893 | orchestrator |
2025-09-08 01:08:38.399906 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-09-08 01:08:38.399919 | orchestrator | Monday 08 September 2025 00:59:38 +0000 (0:00:00.719) 0:00:03.919 ******
2025-09-08 01:08:38.399934 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-09-08 01:08:38.399948 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-09-08 01:08:38.399962 | orchestrator |
2025-09-08 01:08:38.399975 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-09-08 01:08:38.399989 | orchestrator | Monday 08 September 2025 00:59:41 +0000 (0:00:03.258) 0:00:07.177 ******
2025-09-08 01:08:38.400001 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-08 01:08:38.400014 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-08 01:08:38.400027 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.400040 | orchestrator |
2025-09-08 01:08:38.400053 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-08 01:08:38.400066 | orchestrator | Monday 08 September 2025 00:59:45 +0000 (0:00:03.928) 0:00:11.106 ******
2025-09-08 01:08:38.400080 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.400093 | orchestrator |
2025-09-08 01:08:38.400106 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-09-08 01:08:38.400117 | orchestrator | Monday 08 September 2025 00:59:46 +0000 (0:00:00.939) 0:00:12.045 ******
2025-09-08 01:08:38.400128 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.400139 | orchestrator |
2025-09-08 01:08:38.400150 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-08 01:08:38.400161 | orchestrator | Monday 08 September 2025 00:59:48 +0000 (0:00:02.143) 0:00:14.188 ******
2025-09-08 01:08:38.400171 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.400182 | orchestrator |
2025-09-08 01:08:38.400193 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-08 01:08:38.400204 | orchestrator | Monday 08 September 2025 00:59:52 +0000 (0:00:03.966) 0:00:18.155 ******
2025-09-08 01:08:38.400215 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.400225 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.400246 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.400257 | orchestrator |
2025-09-08 01:08:38.400268 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-08 01:08:38.400279 | orchestrator | Monday 08 September 2025 00:59:52 +0000 (0:00:00.375) 0:00:18.530 ******
2025-09-08 01:08:38.400290 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:38.400301 | orchestrator |
2025-09-08 01:08:38.400312 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-08 01:08:38.400322 | orchestrator | Monday 08 September 2025 01:00:22 +0000 (0:00:29.577) 0:00:48.109 ******
2025-09-08 01:08:38.400333 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.400344 | orchestrator |
2025-09-08 01:08:38.400355 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-08 01:08:38.400366 | orchestrator | Monday 08 September 2025 01:00:35 +0000 (0:00:13.529) 0:01:01.639 ******
2025-09-08 01:08:38.400377 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:38.400387 | orchestrator |
2025-09-08 01:08:38.400398 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-08 01:08:38.400409 | orchestrator | Monday 08 September 2025 01:00:47 +0000 (0:00:01.234) 0:01:12.913 ******
2025-09-08 01:08:38.400434 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:38.400446 | orchestrator |
2025-09-08 01:08:38.400457 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-08 01:08:38.400467 | orchestrator | Monday 08 September 2025 01:00:48 +0000 (0:00:01.038) 0:01:14.147 ******
2025-09-08 01:08:38.400478 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.400489 | orchestrator |
2025-09-08 01:08:38.400500 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-08 01:08:38.400511 | orchestrator | Monday 08 September 2025 01:00:49 +0000 (0:00:00.919) 0:01:15.186 ******
2025-09-08 01:08:38.400523 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:08:38.400534 | orchestrator |
2025-09-08 01:08:38.400545 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-08 01:08:38.400555 | orchestrator | Monday 08 September 2025 01:00:50 +0000 (0:00:00.919) 0:01:16.105 ******
2025-09-08 01:08:38.400566 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:38.400577 | orchestrator |
2025-09-08 01:08:38.400588 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-08 01:08:38.400599 | orchestrator | Monday 08 September 2025 01:01:08 +0000 (0:00:18.630) 0:01:34.736 ******
2025-09-08 01:08:38.400610 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.400621 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.400632 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.400642 | orchestrator |
2025-09-08 01:08:38.400653 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-08 01:08:38.400664 | orchestrator |
2025-09-08 01:08:38.400675 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-08 01:08:38.400686 | orchestrator | Monday 08 September 2025 01:01:09 +0000 (0:00:00.332) 0:01:35.068 ******
2025-09-08 01:08:38.400697 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:08:38.400708 | orchestrator |
2025-09-08 01:08:38.400718 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-08 01:08:38.400729 | orchestrator | Monday 08 September 2025 01:01:09 +0000 (0:00:00.599) 0:01:35.668 ******
2025-09-08 01:08:38.400740 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.400751 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.400761 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.400772 | orchestrator |
2025-09-08 01:08:38.400783 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-08 01:08:38.400794 | orchestrator | Monday 08 September 2025 01:01:11 +0000 (0:00:02.013) 0:01:37.681 ******
2025-09-08 01:08:38.400823 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.400842 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.400854 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.400864 | orchestrator |
2025-09-08 01:08:38.400875 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-08 01:08:38.400886 | orchestrator | Monday 08 September 2025 01:01:14 +0000 (0:00:02.510) 0:01:40.191 ******
2025-09-08 01:08:38.400897 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.400908 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.400919 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.400929 | orchestrator |
2025-09-08 01:08:38.400940 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-08 01:08:38.400951 | orchestrator | Monday 08 September 2025 01:01:14 +0000 (0:00:00.627) 0:01:40.819 ******
2025-09-08 01:08:38.400962 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-08 01:08:38.400973 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.400984 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-08 01:08:38.400995 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401005 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-08 01:08:38.401016 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-08 01:08:38.401027 | orchestrator |
2025-09-08 01:08:38.401038 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-08 01:08:38.401049 | orchestrator | Monday 08 September 2025 01:01:24 +0000 (0:00:09.166) 0:01:49.986 ******
2025-09-08 01:08:38.401060 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.401071 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401081 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401092 | orchestrator |
2025-09-08 01:08:38.401103 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-08 01:08:38.401114 | orchestrator | Monday 08 September 2025 01:01:24 +0000 (0:00:00.410) 0:01:50.397 ******
2025-09-08 01:08:38.401125 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-08 01:08:38.401135 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.401146 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-08 01:08:38.401157 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-08 01:08:38.401168 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401178 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401189 | orchestrator |
2025-09-08 01:08:38.401200 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-08 01:08:38.401211 | orchestrator | Monday 08 September 2025 01:01:25 +0000 (0:00:00.894) 0:01:51.291 ******
2025-09-08 01:08:38.401222 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401232 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.401243 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401254 | orchestrator |
2025-09-08 01:08:38.401265 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-08 01:08:38.401276 | orchestrator | Monday 08 September 2025 01:01:26 +0000 (0:00:01.313) 0:01:52.604 ******
2025-09-08 01:08:38.401287 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401297 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401308 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.401319 | orchestrator |
2025-09-08 01:08:38.401330 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-08 01:08:38.401341 | orchestrator | Monday 08 September 2025 01:01:27 +0000 (0:00:01.134) 0:01:53.739 ******
2025-09-08 01:08:38.401352 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401363 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401380 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.401391 | orchestrator |
2025-09-08 01:08:38.401402 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-08 01:08:38.401413 | orchestrator | Monday 08 September 2025 01:01:30 +0000 (0:00:03.017) 0:01:56.757 ******
2025-09-08 01:08:38.401424 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401442 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401453 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:38.401464 | orchestrator |
2025-09-08 01:08:38.401475 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-08 01:08:38.401486 | orchestrator | Monday 08 September 2025 01:01:52 +0000 (0:00:22.065) 0:02:18.822 ******
2025-09-08 01:08:38.401497 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401508 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401519 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:38.401530 | orchestrator |
2025-09-08 01:08:38.401540 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-08 01:08:38.401551 | orchestrator | Monday 08 September 2025 01:02:05 +0000 (0:00:12.097) 0:02:30.919 ******
2025-09-08 01:08:38.401562 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:38.401573 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401584 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401594 | orchestrator |
2025-09-08 01:08:38.401605 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-08 01:08:38.401616 | orchestrator | Monday 08 September 2025 01:02:06 +0000 (0:00:01.051) 0:02:31.970 ******
2025-09-08 01:08:38.401627 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401638 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401649 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.401660 | orchestrator |
2025-09-08 01:08:38.401670 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-08 01:08:38.401681 | orchestrator | Monday 08 September 2025 01:02:17 +0000 (0:00:11.500) 0:02:43.471 ******
2025-09-08 01:08:38.401692 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.401703 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401714 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401725 | orchestrator |
2025-09-08 01:08:38.401736 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-08 01:08:38.401746 | orchestrator | Monday 08 September 2025 01:02:19 +0000 (0:00:01.551) 0:02:45.022 ******
2025-09-08 01:08:38.401757 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.401768 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.401779 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.401790 | orchestrator |
2025-09-08 01:08:38.401851 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-08 01:08:38.401864 | orchestrator |
2025-09-08 01:08:38.401875 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-08 01:08:38.401886 | orchestrator | Monday 08 September 2025 01:02:19 +0000 (0:00:00.331) 0:02:45.354 ******
2025-09-08 01:08:38.401897 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:08:38.401908 | orchestrator |
2025-09-08 01:08:38.401919 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-08 01:08:38.401930 | orchestrator | Monday 08 September 2025 01:02:20 +0000 (0:00:00.550) 0:02:45.904 ******
2025-09-08 01:08:38.401940 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-08 01:08:38.401950 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-08 01:08:38.401960 | orchestrator |
2025-09-08 01:08:38.401969 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-08 01:08:38.401979 | orchestrator | Monday 08 September 2025 01:02:23 +0000 (0:00:03.416) 0:02:49.321 ******
2025-09-08 01:08:38.401989 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-08 01:08:38.402000 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-08 01:08:38.402010 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-08 01:08:38.402080 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-08 01:08:38.402092 | orchestrator |
2025-09-08 01:08:38.402102 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-08 01:08:38.402111 | orchestrator | Monday 08 September 2025 01:02:29 +0000 (0:00:06.176) 0:02:55.498 ******
2025-09-08 01:08:38.402121 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-08 01:08:38.402131 | orchestrator |
2025-09-08 01:08:38.402140 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-08 01:08:38.402150 | orchestrator | Monday 08 September 2025 01:02:33 +0000 (0:00:03.549) 0:02:59.047 ******
2025-09-08 01:08:38.402160 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:08:38.402169 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-08 01:08:38.402179 | orchestrator |
2025-09-08 01:08:38.402189 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-08 01:08:38.402198 | orchestrator | Monday 08 September 2025 01:02:37 +0000 (0:00:03.875) 0:03:02.922 ******
2025-09-08 01:08:38.402208 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:08:38.402218 | orchestrator |
2025-09-08 01:08:38.402228 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-08 01:08:38.402238 | orchestrator | Monday 08 September 2025 01:02:40 +0000 (0:00:03.336) 0:03:06.259 ******
2025-09-08 01:08:38.402248 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-08 01:08:38.402257 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-08 01:08:38.402267 | orchestrator |
2025-09-08 01:08:38.402277 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-08 01:08:38.402294 | orchestrator | Monday 08 September 2025 01:02:47 +0000 (0:00:07.528) 0:03:13.787 ******
2025-09-08 01:08:38.402310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-08 01:08:38.402326 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.402346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.402365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.402378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.402402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.402413 | orchestrator | 2025-09-08 01:08:38.402433 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-08 01:08:38.402443 | orchestrator | Monday 08 September 2025 01:02:49 +0000 (0:00:01.347) 0:03:15.134 ****** 2025-09-08 01:08:38.402453 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.402463 | orchestrator | 2025-09-08 01:08:38.402472 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-08 01:08:38.402482 | orchestrator | Monday 08 September 2025 01:02:49 +0000 (0:00:00.174) 0:03:15.309 ****** 2025-09-08 01:08:38.402500 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.402510 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.402519 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.402529 | orchestrator | 2025-09-08 01:08:38.402539 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-08 01:08:38.402549 | orchestrator | Monday 08 September 2025 01:02:50 +0000 (0:00:01.096) 0:03:16.405 ****** 2025-09-08 01:08:38.402558 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 01:08:38.402568 | orchestrator | 2025-09-08 01:08:38.402578 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-08 01:08:38.402587 | orchestrator | Monday 08 September 2025 01:02:51 +0000 (0:00:00.914) 0:03:17.320 ****** 2025-09-08 01:08:38.402597 | orchestrator | skipping: 
[testbed-node-0] 2025-09-08 01:08:38.402607 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.402616 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.402626 | orchestrator | 2025-09-08 01:08:38.402635 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-08 01:08:38.402645 | orchestrator | Monday 08 September 2025 01:02:51 +0000 (0:00:00.370) 0:03:17.690 ****** 2025-09-08 01:08:38.402655 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:08:38.402664 | orchestrator | 2025-09-08 01:08:38.402674 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-08 01:08:38.402683 | orchestrator | Monday 08 September 2025 01:02:52 +0000 (0:00:00.544) 0:03:18.234 ****** 2025-09-08 01:08:38.402700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.402712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.402730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.402742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.402753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}}) 2025-09-08 01:08:38.402771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.402782 | orchestrator | 2025-09-08 01:08:38.402792 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-08 01:08:38.402817 | orchestrator | Monday 08 September 2025 01:02:55 +0000 (0:00:03.333) 0:03:21.567 ****** 2025-09-08 01:08:38.402828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:08:38.402845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.402855 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.402866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:08:38.402883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.402893 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.402904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:08:38.402921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.402932 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.402941 | orchestrator | 2025-09-08 01:08:38.402951 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-08 01:08:38.402961 | orchestrator | Monday 08 September 2025 01:02:56 +0000 (0:00:01.142) 0:03:22.710 ****** 2025-09-08 01:08:38.402971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:08:38.402982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.402993 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.403011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:08:38.403029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.403039 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.403049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:08:38.403060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.403071 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.403080 | orchestrator | 2025-09-08 01:08:38.403090 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-08 01:08:38.403100 | orchestrator | Monday 08 September 2025 01:02:58 +0000 (0:00:01.371) 0:03:24.082 ****** 2025-09-08 01:08:38.403120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.403140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-09-08 01:08:38.403152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.403169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.403180 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.403243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.403255 | orchestrator | 2025-09-08 01:08:38.403289 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-08 01:08:38.403299 | orchestrator | Monday 08 September 2025 01:03:01 +0000 (0:00:03.004) 0:03:27.089 ****** 2025-09-08 01:08:38.403310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.403343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.403364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.403389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.403400 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.403449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.403460 | orchestrator | 2025-09-08 01:08:38.403523 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-08 01:08:38.403534 | orchestrator | Monday 08 September 2025 01:03:09 +0000 (0:00:07.856) 0:03:34.946 ****** 2025-09-08 01:08:38.403552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:08:38.403570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.403580 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.403600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:08:38.403612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.403622 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.403632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-08 01:08:38.403659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.403670 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.403680 | orchestrator | 2025-09-08 01:08:38.403689 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-08 01:08:38.403699 | orchestrator | Monday 08 September 2025 01:03:10 +0000 (0:00:00.920) 0:03:35.866 ****** 2025-09-08 01:08:38.403709 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:08:38.403718 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:08:38.403728 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:08:38.403737 | orchestrator | 
2025-09-08 01:08:38.403747 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-08 01:08:38.403757 | orchestrator | Monday 08 September 2025 01:03:13 +0000 (0:00:03.242) 0:03:39.108 ****** 2025-09-08 01:08:38.403767 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.403776 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.403786 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.403795 | orchestrator | 2025-09-08 01:08:38.403856 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-08 01:08:38.403867 | orchestrator | Monday 08 September 2025 01:03:14 +0000 (0:00:00.985) 0:03:40.094 ****** 2025-09-08 01:08:38.403877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.403889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.403922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-08 01:08:38.403933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.403944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.403954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.403964 | orchestrator | 2025-09-08 01:08:38.403974 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-08 01:08:38.403984 | orchestrator | Monday 08 September 2025 01:03:16 +0000 (0:00:02.423) 0:03:42.517 ****** 2025-09-08 01:08:38.403993 | orchestrator | 2025-09-08 01:08:38.404003 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-08 01:08:38.404012 | orchestrator | Monday 08 September 2025 01:03:16 +0000 (0:00:00.261) 0:03:42.778 ****** 2025-09-08 01:08:38.404028 | orchestrator | 2025-09-08 01:08:38.404038 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-08 01:08:38.404048 | orchestrator | Monday 08 September 2025 01:03:17 +0000 (0:00:00.269) 0:03:43.048 ****** 2025-09-08 01:08:38.404057 | orchestrator | 2025-09-08 01:08:38.404067 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-08 01:08:38.404076 | orchestrator | Monday 08 September 2025 01:03:17 +0000 (0:00:00.320) 0:03:43.369 ****** 2025-09-08 01:08:38.404086 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:08:38.404095 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:08:38.404105 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:08:38.404114 | orchestrator | 2025-09-08 01:08:38.404124 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 
2025-09-08 01:08:38.404133 | orchestrator | Monday 08 September 2025 01:03:39 +0000 (0:00:21.686) 0:04:05.055 ****** 2025-09-08 01:08:38.404143 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:08:38.404152 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:08:38.404162 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:08:38.404172 | orchestrator | 2025-09-08 01:08:38.404181 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-08 01:08:38.404191 | orchestrator | 2025-09-08 01:08:38.404201 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-08 01:08:38.404210 | orchestrator | Monday 08 September 2025 01:03:47 +0000 (0:00:07.813) 0:04:12.869 ****** 2025-09-08 01:08:38.404220 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:08:38.404232 | orchestrator | 2025-09-08 01:08:38.404247 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-08 01:08:38.404257 | orchestrator | Monday 08 September 2025 01:03:48 +0000 (0:00:01.737) 0:04:14.606 ****** 2025-09-08 01:08:38.404267 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:08:38.404275 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:08:38.404283 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:08:38.404291 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.404299 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.404307 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.404315 | orchestrator | 2025-09-08 01:08:38.404323 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-08 01:08:38.404330 | orchestrator | Monday 08 September 2025 01:03:49 +0000 (0:00:00.788) 0:04:15.394 ****** 2025-09-08 
01:08:38.404338 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.404346 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.404354 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.404362 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-08 01:08:38.404370 | orchestrator | 2025-09-08 01:08:38.404378 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-08 01:08:38.404386 | orchestrator | Monday 08 September 2025 01:03:51 +0000 (0:00:01.804) 0:04:17.199 ****** 2025-09-08 01:08:38.404394 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-08 01:08:38.404402 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-08 01:08:38.404409 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-08 01:08:38.404417 | orchestrator | 2025-09-08 01:08:38.404425 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-08 01:08:38.404433 | orchestrator | Monday 08 September 2025 01:03:52 +0000 (0:00:00.916) 0:04:18.115 ****** 2025-09-08 01:08:38.404441 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-08 01:08:38.404449 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-08 01:08:38.404457 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-08 01:08:38.404465 | orchestrator | 2025-09-08 01:08:38.404472 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-08 01:08:38.404486 | orchestrator | Monday 08 September 2025 01:03:53 +0000 (0:00:01.486) 0:04:19.601 ****** 2025-09-08 01:08:38.404494 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-08 01:08:38.404502 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:08:38.404510 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-08 
01:08:38.404518 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:08:38.404525 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-08 01:08:38.404533 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:08:38.404541 | orchestrator | 2025-09-08 01:08:38.404549 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-08 01:08:38.404557 | orchestrator | Monday 08 September 2025 01:03:54 +0000 (0:00:01.097) 0:04:20.698 ****** 2025-09-08 01:08:38.404565 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-08 01:08:38.404572 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-08 01:08:38.404580 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-08 01:08:38.404588 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-08 01:08:38.404596 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.404604 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-08 01:08:38.404612 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-08 01:08:38.404620 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-08 01:08:38.404628 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-08 01:08:38.404636 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-08 01:08:38.404644 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.404651 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-08 01:08:38.404659 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-08 01:08:38.404667 | orchestrator | skipping: 
[testbed-node-2] 2025-09-08 01:08:38.404675 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-08 01:08:38.404683 | orchestrator | 2025-09-08 01:08:38.404691 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-08 01:08:38.404698 | orchestrator | Monday 08 September 2025 01:03:56 +0000 (0:00:01.930) 0:04:22.629 ****** 2025-09-08 01:08:38.404706 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.404714 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:08:38.404722 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:08:38.404730 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.404738 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:08:38.404745 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.404753 | orchestrator | 2025-09-08 01:08:38.404761 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-08 01:08:38.404769 | orchestrator | Monday 08 September 2025 01:03:58 +0000 (0:00:02.071) 0:04:24.700 ****** 2025-09-08 01:08:38.404777 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.404785 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.404793 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.404815 | orchestrator | changed: [testbed-node-3] 2025-09-08 01:08:38.404824 | orchestrator | changed: [testbed-node-4] 2025-09-08 01:08:38.404831 | orchestrator | changed: [testbed-node-5] 2025-09-08 01:08:38.404839 | orchestrator | 2025-09-08 01:08:38.404847 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-08 01:08:38.404855 | orchestrator | Monday 08 September 2025 01:04:01 +0000 (0:00:02.725) 0:04:27.426 ****** 2025-09-08 01:08:38.405285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405311 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405455 | orchestrator | 2025-09-08 01:08:38.405463 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-08 01:08:38.405471 | orchestrator | Monday 08 September 2025 01:04:05 +0000 (0:00:04.040) 0:04:31.466 ****** 2025-09-08 01:08:38.405479 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:08:38.405489 | orchestrator | 2025-09-08 01:08:38.405497 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-08 01:08:38.405505 | orchestrator | Monday 08 September 2025 01:04:07 +0000 (0:00:02.009) 0:04:33.475 ****** 2025-09-08 01:08:38.405513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 
'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405559 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2025-09-08 01:08:38.405662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.405670 | orchestrator | 2025-09-08 01:08:38.405678 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-08 01:08:38.405686 | orchestrator | Monday 08 September 2025 01:04:12 +0000 (0:00:04.515) 0:04:37.991 ****** 2025-09-08 01:08:38.405694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:08:38.405704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:08:38.405712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.405727 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:08:38.405740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:08:38.405749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:08:38.405757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:08:38.405765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:08:38.405774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.405788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.405797 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:08:38.405824 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:08:38.405837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:08:38.405845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.405854 | orchestrator | skipping: [testbed-node-1] 2025-09-08 
01:08:38.405862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:08:38.405870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.405879 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.405895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  
2025-09-08 01:08:38.405905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.405914 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.405924 | orchestrator | 2025-09-08 01:08:38.405934 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-08 01:08:38.405943 | orchestrator | Monday 08 September 2025 01:04:14 +0000 (0:00:02.081) 0:04:40.073 ****** 2025-09-08 01:08:38.405958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:08:38.405968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:08:38.405978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.405987 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:08:38.405997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:08:38.406126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:08:38.406149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.406160 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:08:38.406170 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-08 01:08:38.406181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:08:38.406191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-08 01:08:38.406394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.406405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.406605 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:08:38.406618 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.406651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:08:38.406661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-08 01:08:38.406670 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.406678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-08 01:08:38.406686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.406701 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.406709 | orchestrator |
2025-09-08 01:08:38.406717 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-08 01:08:38.406725 | orchestrator | Monday 08 September 2025 01:04:17 +0000 (0:00:03.323) 0:04:43.396 ******
2025-09-08 01:08:38.406733 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.406741 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.406749 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.406757 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-08 01:08:38.406765 | orchestrator |
2025-09-08 01:08:38.406773 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-08 01:08:38.406781 | orchestrator | Monday 08 September 2025 01:04:19 +0000 (0:00:02.379) 0:04:45.776 ******
2025-09-08 01:08:38.406789 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-08 01:08:38.406797 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-08 01:08:38.406859 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-08 01:08:38.406876 | orchestrator |
2025-09-08 01:08:38.406884 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-08 01:08:38.406892 | orchestrator | Monday 08 September 2025 01:04:21 +0000 (0:00:01.345) 0:04:47.122 ******
2025-09-08 01:08:38.406900 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-08 01:08:38.406908 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-08 01:08:38.406916 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-08 01:08:38.406923 | orchestrator |
2025-09-08 01:08:38.406931 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-08 01:08:38.406939 | orchestrator | Monday 08 September 2025 01:04:22 +0000 (0:00:01.310) 0:04:48.433 ******
2025-09-08 01:08:38.406947 | orchestrator | ok: [testbed-node-3]
2025-09-08 01:08:38.406955 | orchestrator | ok: [testbed-node-4]
2025-09-08 01:08:38.406963 | orchestrator | ok: [testbed-node-5]
2025-09-08 01:08:38.406971 | orchestrator |
2025-09-08 01:08:38.406979 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-08 01:08:38.406987 | orchestrator | Monday 08 September 2025 01:04:23 +0000 (0:00:01.052) 0:04:49.485 ******
2025-09-08 01:08:38.406995 | orchestrator | ok: [testbed-node-3]
2025-09-08 01:08:38.407003 | orchestrator | ok: [testbed-node-4]
2025-09-08 01:08:38.407011 | orchestrator | ok: [testbed-node-5]
2025-09-08 01:08:38.407019 | orchestrator |
2025-09-08 01:08:38.407027 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-08 01:08:38.407035 | orchestrator | Monday 08 September 2025 01:04:24 +0000 (0:00:01.030) 0:04:50.516 ******
2025-09-08 01:08:38.407043 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-08 01:08:38.407076 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-08 01:08:38.407085 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-08 01:08:38.407093 | orchestrator |
2025-09-08 01:08:38.407101 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-08 01:08:38.407109 | orchestrator | Monday 08 September 2025 01:04:26 +0000 (0:00:01.358) 0:04:51.875 ******
2025-09-08 01:08:38.407117 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-08 01:08:38.407125 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-08 01:08:38.407133 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-08 01:08:38.407147 | orchestrator |
2025-09-08 01:08:38.407155 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-08 01:08:38.407163 | orchestrator | Monday 08 September 2025 01:04:27 +0000 (0:00:01.586) 0:04:53.461 ******
2025-09-08 01:08:38.407171 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-08 01:08:38.407179 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-08 01:08:38.407187 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-08 01:08:38.407195 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-08 01:08:38.407203 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-08 01:08:38.407210 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-08 01:08:38.407219 | orchestrator |
2025-09-08 01:08:38.407226 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-08 01:08:38.407234 | orchestrator | Monday 08 September 2025 01:04:33 +0000 (0:00:05.895) 0:04:59.357 ******
2025-09-08 01:08:38.407241 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.407249 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.407257 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.407265 | orchestrator |
2025-09-08 01:08:38.407273 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-08 01:08:38.407281 | orchestrator | Monday 08 September 2025 01:04:33 +0000 (0:00:00.232) 0:04:59.590 ******
2025-09-08 01:08:38.407289 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.407297 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.407306 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.407314 | orchestrator |
2025-09-08 01:08:38.407321 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-08 01:08:38.407329 | orchestrator | Monday 08 September 2025 01:04:34 +0000 (0:00:00.297) 0:04:59.887 ******
2025-09-08 01:08:38.407337 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.407345 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.407352 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.407360 | orchestrator |
2025-09-08 01:08:38.407368 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-08 01:08:38.407376 | orchestrator | Monday 08 September 2025 01:04:36 +0000 (0:00:02.299) 0:05:02.186 ******
2025-09-08 01:08:38.407385 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-08 01:08:38.407394 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-08 01:08:38.407403 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-08 01:08:38.407411 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-08 01:08:38.407420 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-08 01:08:38.407428 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-08 01:08:38.407435 | orchestrator |
2025-09-08 01:08:38.407444 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-08 01:08:38.407452 | orchestrator | Monday 08 September 2025 01:04:40 +0000 (0:00:03.809) 0:05:05.996 ******
2025-09-08 01:08:38.407460 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-08 01:08:38.407468 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-08 01:08:38.407476 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-08 01:08:38.407484 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-08 01:08:38.407492 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.407504 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-08 01:08:38.407512 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.407520 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-08 01:08:38.407529 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.407537 | orchestrator |
2025-09-08 01:08:38.407545 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-08 01:08:38.407553 | orchestrator | Monday 08 September 2025 01:04:44 +0000 (0:00:03.911) 0:05:09.908 ******
2025-09-08 01:08:38.407561 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.407569 | orchestrator |
2025-09-08 01:08:38.407578 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-08 01:08:38.407586 | orchestrator | Monday 08 September 2025 01:04:44 +0000 (0:00:00.143) 0:05:10.051 ******
2025-09-08 01:08:38.407594 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.407603 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.407609 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.407616 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.407623 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.407630 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.407636 | orchestrator |
2025-09-08 01:08:38.407643 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-08 01:08:38.407667 | orchestrator | Monday 08 September 2025 01:04:44 +0000 (0:00:00.785) 0:05:10.837 ******
2025-09-08 01:08:38.407675 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-08 01:08:38.407681 | orchestrator |
2025-09-08 01:08:38.407688 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-09-08 01:08:38.407695 | orchestrator | Monday 08 September 2025 01:04:45 +0000 (0:00:00.848) 0:05:11.686 ******
2025-09-08 01:08:38.407701 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.407708 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.407714 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.407721 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.407728 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.407734 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.407741 | orchestrator |
2025-09-08 01:08:38.407747 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-09-08 01:08:38.407754 | orchestrator | Monday 08 September 2025 01:04:46 +0000 (0:00:00.709) 0:05:12.395 ******
2025-09-08 01:08:38.407761 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.407900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.407907 | orchestrator |
2025-09-08 01:08:38.407914 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-09-08 01:08:38.407921 | orchestrator | Monday 08 September 2025 01:04:51 +0000 (0:00:04.983) 0:05:17.378 ******
2025-09-08 01:08:38.407928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:08:38.407940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:08:38.407947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:08:38.407954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:08:38.407967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:08:38.407974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:08:38.407986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.407993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.408000 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute
5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.408011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:08:38.408019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:08:38.408026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:08:38.408037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.408045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.408052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.408063 | orchestrator |
2025-09-08 01:08:38.408070 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-09-08 01:08:38.408077 | orchestrator | Monday 08 September 2025 01:04:57 +0000 (0:00:06.433) 0:05:23.812 ******
2025-09-08 01:08:38.408084 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.408090 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.408097 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.408103 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.408110 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.408117 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.408123 | orchestrator |
2025-09-08 01:08:38.408130 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-09-08 01:08:38.408136 | orchestrator | Monday 08 September 2025 01:04:59 +0000 (0:00:01.864) 0:05:25.677 ******
2025-09-08 01:08:38.408143 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-08 01:08:38.408150 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-08 01:08:38.408156 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-08 01:08:38.408163 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-08 01:08:38.408170 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-08 01:08:38.408176 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-08 01:08:38.408183 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-08 01:08:38.408189 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.408196 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-08 01:08:38.408203 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.408209 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-08 01:08:38.408216 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.408223 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-08 01:08:38.408229 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-08 01:08:38.408236 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-08 01:08:38.408243 | orchestrator |
2025-09-08 01:08:38.408249 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-09-08 01:08:38.408256 | orchestrator | Monday 08 September 2025 01:05:04 +0000 (0:00:04.885) 0:05:30.563 ******
2025-09-08 01:08:38.408262 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.408269 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.408275 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.408282 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.408288 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.408295 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.408302 | orchestrator |
2025-09-08 01:08:38.408308 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-09-08 01:08:38.408315 | orchestrator | Monday 08 September 2025 01:05:05 +0000 (0:00:00.834) 0:05:31.397 ******
2025-09-08 01:08:38.408322 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-08 01:08:38.408329 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-08 01:08:38.408339 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-08 01:08:38.408346 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-08 01:08:38.408357 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-08 01:08:38.408364 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-08 01:08:38.408371 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408377 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408384 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408390 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408397 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.408404 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408410 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.408417 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408423 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.408430 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408437 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service':
'nova-libvirt'})
2025-09-08 01:08:38.408443 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408450 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408456 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408463 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-08 01:08:38.408469 | orchestrator |
2025-09-08 01:08:38.408476 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-09-08 01:08:38.408483 | orchestrator | Monday 08 September 2025 01:05:10 +0000 (0:00:05.315) 0:05:36.712 ******
2025-09-08 01:08:38.408490 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-08 01:08:38.408496 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-08 01:08:38.408503 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-08 01:08:38.408509 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-08 01:08:38.408516 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-08 01:08:38.408523 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-08 01:08:38.408529 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-08 01:08:38.408536 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-08 01:08:38.408542 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-08 01:08:38.408549 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-08 01:08:38.408555 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-08 01:08:38.408562 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-08 01:08:38.408569 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-08 01:08:38.408580 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.408587 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-08 01:08:38.408593 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-08 01:08:38.408600 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-08 01:08:38.408607 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.408613 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-08 01:08:38.408620 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-08 01:08:38.408626 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.408633 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-08 01:08:38.408640 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-08 01:08:38.408649 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-08 01:08:38.408656 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-08 01:08:38.408663 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-08 01:08:38.408669 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-08 01:08:38.408676 | orchestrator |
2025-09-08 01:08:38.408683 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-09-08 01:08:38.408689 | orchestrator | Monday 08 September 2025 01:05:19 +0000 (0:00:08.278) 0:05:44.991 ******
2025-09-08 01:08:38.408696 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.408702 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.408709 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.408715 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.408722 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.408728 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.408735 | orchestrator |
2025-09-08 01:08:38.408741 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-09-08 01:08:38.408748 | orchestrator | Monday 08 September 2025 01:05:19 +0000 (0:00:00.604) 0:05:45.595 ******
2025-09-08 01:08:38.408755 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.408761 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.408768 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.408774 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.408781 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.408787 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.408794 | orchestrator |
2025-09-08 01:08:38.408813 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-09-08 01:08:38.408820 | orchestrator | Monday 08 September 2025 01:05:20 +0000 (0:00:00.836) 0:05:46.432 ******
2025-09-08 01:08:38.408827 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.408833 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.408840 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.408846 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.408853 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.408860 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.408866 | orchestrator |
2025-09-08 01:08:38.408873 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-09-08 01:08:38.408880 | orchestrator | Monday 08 September 2025 01:05:22 +0000 (0:00:01.922) 0:05:48.354 ******
2025-09-08 01:08:38.408887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:08:38.408898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:08:38.408906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.408913 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.408924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:08:38.408931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:08:38.408938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.408950 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.408957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:08:38.408964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-08 01:08:38.408976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.408983 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.408990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:08:38.408998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.409013 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.409020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:08:38.409027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.409034 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.409041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:08:38.409051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-08 01:08:38.409058 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.409065 | orchestrator |
2025-09-08 01:08:38.409072 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-09-08 01:08:38.409079 | orchestrator | Monday 08 September 2025 01:05:24 +0000 (0:00:01.720) 0:05:50.075 ******
2025-09-08 01:08:38.409085 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-08 01:08:38.409092 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-08 01:08:38.409099 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.409105 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-08 01:08:38.409112 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-08 01:08:38.409119 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.409125 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-08 01:08:38.409132 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-08 01:08:38.409138 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.409145 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-08 01:08:38.409152 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-08 01:08:38.409163 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.409169 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-08 01:08:38.409176 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-08 01:08:38.409183 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.409189 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-08 01:08:38.409196 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-08 01:08:38.409203 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.409209 | orchestrator |
2025-09-08 01:08:38.409216 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-09-08 01:08:38.409222 | orchestrator | Monday 08 September 2025 01:05:24 +0000 (0:00:00.589) 0:05:50.664 ******
2025-09-08 01:08:38.409229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:08:38.409237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:08:38.409247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-08 01:08:38.409255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-08 01:08:38.409267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409274 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-08 01:08:38.409353 | orchestrator | 2025-09-08 01:08:38.409360 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-08 01:08:38.409367 | orchestrator | Monday 08 September 2025 01:05:27 +0000 (0:00:03.153) 0:05:53.817 ****** 2025-09-08 01:08:38.409373 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:08:38.409380 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:08:38.409387 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:08:38.409396 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.409408 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.409414 | orchestrator | 
skipping: [testbed-node-2]
2025-09-08 01:08:38.409421 | orchestrator |
2025-09-08 01:08:38.409428 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:08:38.409434 | orchestrator | Monday 08 September 2025 01:05:28 +0000 (0:00:00.541) 0:05:54.358 ******
2025-09-08 01:08:38.409441 | orchestrator |
2025-09-08 01:08:38.409447 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:08:38.409454 | orchestrator | Monday 08 September 2025 01:05:28 +0000 (0:00:00.121) 0:05:54.480 ******
2025-09-08 01:08:38.409461 | orchestrator |
2025-09-08 01:08:38.409467 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:08:38.409474 | orchestrator | Monday 08 September 2025 01:05:28 +0000 (0:00:00.120) 0:05:54.600 ******
2025-09-08 01:08:38.409480 | orchestrator |
2025-09-08 01:08:38.409487 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:08:38.409494 | orchestrator | Monday 08 September 2025 01:05:28 +0000 (0:00:00.238) 0:05:54.839 ******
2025-09-08 01:08:38.409500 | orchestrator |
2025-09-08 01:08:38.409507 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:08:38.409513 | orchestrator | Monday 08 September 2025 01:05:29 +0000 (0:00:00.121) 0:05:54.961 ******
2025-09-08 01:08:38.409520 | orchestrator |
2025-09-08 01:08:38.409526 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-08 01:08:38.409533 | orchestrator | Monday 08 September 2025 01:05:29 +0000 (0:00:00.118) 0:05:55.079 ******
2025-09-08 01:08:38.409540 | orchestrator |
2025-09-08 01:08:38.409546 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-09-08 01:08:38.409553 | orchestrator | Monday 08 September 2025 01:05:29 +0000 (0:00:00.123) 0:05:55.202 ******
2025-09-08 01:08:38.409559 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.409566 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:08:38.409572 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:08:38.409579 | orchestrator |
2025-09-08 01:08:38.409585 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-08 01:08:38.409592 | orchestrator | Monday 08 September 2025 01:05:42 +0000 (0:00:12.956) 0:06:08.158 ******
2025-09-08 01:08:38.409599 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.409605 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:08:38.409612 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:08:38.409619 | orchestrator |
2025-09-08 01:08:38.409625 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-08 01:08:38.409632 | orchestrator | Monday 08 September 2025 01:05:59 +0000 (0:00:17.299) 0:06:25.458 ******
2025-09-08 01:08:38.409638 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.409645 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.409651 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.409658 | orchestrator |
2025-09-08 01:08:38.409665 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-08 01:08:38.409672 | orchestrator | Monday 08 September 2025 01:06:23 +0000 (0:00:24.113) 0:06:49.572 ******
2025-09-08 01:08:38.409678 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.409685 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.409691 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.409698 | orchestrator |
2025-09-08 01:08:38.409705 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-09-08 01:08:38.409711 | orchestrator | Monday 08 September 2025 01:06:58 +0000 (0:00:35.214) 0:07:24.786 ******
2025-09-08 01:08:38.409718 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.409724 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.409731 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.409737 | orchestrator |
2025-09-08 01:08:38.409744 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-08 01:08:38.409751 | orchestrator | Monday 08 September 2025 01:06:59 +0000 (0:00:00.787) 0:07:25.573 ******
2025-09-08 01:08:38.409763 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.409770 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.409776 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.409783 | orchestrator |
2025-09-08 01:08:38.409789 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-09-08 01:08:38.409796 | orchestrator | Monday 08 September 2025 01:07:00 +0000 (0:00:01.063) 0:07:26.636 ******
2025-09-08 01:08:38.409815 | orchestrator | changed: [testbed-node-3]
2025-09-08 01:08:38.409821 | orchestrator | changed: [testbed-node-4]
2025-09-08 01:08:38.409828 | orchestrator | changed: [testbed-node-5]
2025-09-08 01:08:38.409835 | orchestrator |
2025-09-08 01:08:38.409841 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-09-08 01:08:38.409848 | orchestrator | Monday 08 September 2025 01:07:21 +0000 (0:00:20.845) 0:07:47.482 ******
2025-09-08 01:08:38.409855 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.409861 | orchestrator |
2025-09-08 01:08:38.409868 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-09-08 01:08:38.409874 | orchestrator | Monday 08 September 2025 01:07:21 +0000 (0:00:00.157) 0:07:47.640 ******
2025-09-08 01:08:38.409881 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.409888 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.409894 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.409901 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.409907 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.409914 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-09-08 01:08:38.409921 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-09-08 01:08:38.409928 | orchestrator |
2025-09-08 01:08:38.409934 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-08 01:08:38.409941 | orchestrator | Monday 08 September 2025 01:07:45 +0000 (0:00:23.850) 0:08:11.490 ******
2025-09-08 01:08:38.409948 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.409954 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.409961 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.409968 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.409977 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.409984 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.409991 | orchestrator |
2025-09-08 01:08:38.409997 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-08 01:08:38.410004 | orchestrator | Monday 08 September 2025 01:07:56 +0000 (0:00:10.754) 0:08:22.245 ******
2025-09-08 01:08:38.410010 | orchestrator | skipping: [testbed-node-3]
2025-09-08 01:08:38.410037 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.410045 | orchestrator | skipping: [testbed-node-5]
2025-09-08 01:08:38.410052 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.410059 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.410065 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-09-08 01:08:38.410072 | orchestrator |
2025-09-08 01:08:38.410079 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-08 01:08:38.410085 | orchestrator | Monday 08 September 2025 01:08:01 +0000 (0:00:04.986) 0:08:27.232 ******
2025-09-08 01:08:38.410092 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-09-08 01:08:38.410099 | orchestrator |
2025-09-08 01:08:38.410105 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-08 01:08:38.410112 | orchestrator | Monday 08 September 2025 01:08:14 +0000 (0:00:12.946) 0:08:40.178 ******
2025-09-08 01:08:38.410119 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-09-08 01:08:38.410125 | orchestrator |
2025-09-08 01:08:38.410132 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-08 01:08:38.410138 | orchestrator | Monday 08 September 2025 01:08:15 +0000 (0:00:01.422) 0:08:41.601 ******
2025-09-08 01:08:38.410151 | orchestrator | skipping: [testbed-node-4]
2025-09-08 01:08:38.410157 | orchestrator |
2025-09-08 01:08:38.410164 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-08 01:08:38.410171 | orchestrator | Monday 08 September 2025 01:08:17 +0000 (0:00:01.313) 0:08:42.914 ******
2025-09-08 01:08:38.410177 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-09-08 01:08:38.410184 | orchestrator |
2025-09-08 01:08:38.410190 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-08 01:08:38.410197 | orchestrator | Monday 08 September 2025 01:08:28 +0000 (0:00:11.669) 0:08:54.584 ******
2025-09-08 01:08:38.410204 | orchestrator | ok: [testbed-node-3]
2025-09-08 01:08:38.410210 | orchestrator | ok: [testbed-node-4]
2025-09-08 01:08:38.410217 | orchestrator | ok: [testbed-node-5]
2025-09-08 01:08:38.410223 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:08:38.410230 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:08:38.410236 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:08:38.410243 | orchestrator |
2025-09-08 01:08:38.410249 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-08 01:08:38.410256 | orchestrator |
2025-09-08 01:08:38.410263 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-08 01:08:38.410269 | orchestrator | Monday 08 September 2025 01:08:30 +0000 (0:00:01.921) 0:08:56.505 ******
2025-09-08 01:08:38.410276 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:08:38.410283 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:08:38.410289 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:08:38.410296 | orchestrator |
2025-09-08 01:08:38.410303 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-08 01:08:38.410309 | orchestrator |
2025-09-08 01:08:38.410316 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-08 01:08:38.410322 | orchestrator | Monday 08 September 2025 01:08:31 +0000 (0:00:00.938) 0:08:57.444 ******
2025-09-08 01:08:38.410329 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:08:38.410336 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:08:38.410342 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:08:38.410349 | orchestrator |
2025-09-08 01:08:38.410355 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-08 01:08:38.410362 | orchestrator |
2025-09-08 01:08:38.410369 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-08 01:08:38.410375 | orchestrator | Monday 08 September 2025 01:08:32 +0000 (0:00:00.578) 0:08:58.022 ******
2025-09-08 01:08:38.410382
| orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-08 01:08:38.410388 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-08 01:08:38.410395 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-08 01:08:38.410401 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-08 01:08:38.410408 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-08 01:08:38.410415 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-08 01:08:38.410421 | orchestrator | skipping: [testbed-node-3] 2025-09-08 01:08:38.410428 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-08 01:08:38.410434 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-08 01:08:38.410441 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-08 01:08:38.410448 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-08 01:08:38.410454 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-08 01:08:38.410461 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-08 01:08:38.410467 | orchestrator | skipping: [testbed-node-4] 2025-09-08 01:08:38.410474 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-08 01:08:38.410481 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-08 01:08:38.410492 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-08 01:08:38.410499 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-08 01:08:38.410505 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-08 01:08:38.410512 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-08 01:08:38.410518 | orchestrator | skipping: [testbed-node-5] 2025-09-08 01:08:38.410525 | 
orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-08 01:08:38.410535 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-08 01:08:38.410542 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-08 01:08:38.410548 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-08 01:08:38.410555 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-08 01:08:38.410562 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-08 01:08:38.410568 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.410575 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-08 01:08:38.410582 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-08 01:08:38.410588 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-08 01:08:38.410595 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-08 01:08:38.410602 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-08 01:08:38.410608 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-08 01:08:38.410615 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.410621 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-08 01:08:38.410628 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-08 01:08:38.410635 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-08 01:08:38.410641 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-08 01:08:38.410648 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-08 01:08:38.410654 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-08 01:08:38.410661 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.410668 | 
orchestrator | 2025-09-08 01:08:38.410674 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-08 01:08:38.410681 | orchestrator | 2025-09-08 01:08:38.410688 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-08 01:08:38.410694 | orchestrator | Monday 08 September 2025 01:08:33 +0000 (0:00:01.252) 0:08:59.275 ****** 2025-09-08 01:08:38.410701 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-08 01:08:38.410708 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-08 01:08:38.410715 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.410721 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-08 01:08:38.410728 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-08 01:08:38.410735 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.410741 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-08 01:08:38.410748 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-08 01:08:38.410755 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.410761 | orchestrator | 2025-09-08 01:08:38.410768 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-08 01:08:38.410775 | orchestrator | 2025-09-08 01:08:38.410781 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-08 01:08:38.410788 | orchestrator | Monday 08 September 2025 01:08:33 +0000 (0:00:00.519) 0:08:59.794 ****** 2025-09-08 01:08:38.410795 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.410816 | orchestrator | 2025-09-08 01:08:38.410823 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-08 01:08:38.410834 | orchestrator | 2025-09-08 01:08:38.410841 | orchestrator | TASK 
[nova-cell : Run Nova cell online database migrations] ******************** 2025-09-08 01:08:38.410847 | orchestrator | Monday 08 September 2025 01:08:34 +0000 (0:00:00.737) 0:09:00.532 ****** 2025-09-08 01:08:38.410854 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:08:38.410861 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:08:38.410867 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:08:38.410874 | orchestrator | 2025-09-08 01:08:38.410881 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:08:38.410887 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:08:38.410895 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-08 01:08:38.410902 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-08 01:08:38.410909 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-08 01:08:38.410915 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-08 01:08:38.410922 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-09-08 01:08:38.410929 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-08 01:08:38.410936 | orchestrator | 2025-09-08 01:08:38.410942 | orchestrator | 2025-09-08 01:08:38.410949 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:08:38.410956 | orchestrator | Monday 08 September 2025 01:08:35 +0000 (0:00:00.431) 0:09:00.963 ****** 2025-09-08 01:08:38.410963 | orchestrator | =============================================================================== 2025-09-08 01:08:38.410973 | 
orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 35.21s 2025-09-08 01:08:38.410980 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.58s 2025-09-08 01:08:38.410986 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.11s 2025-09-08 01:08:38.410993 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.85s 2025-09-08 01:08:38.410999 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.07s 2025-09-08 01:08:38.411006 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.69s 2025-09-08 01:08:38.411013 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 20.85s 2025-09-08 01:08:38.411019 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.63s 2025-09-08 01:08:38.411026 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.30s 2025-09-08 01:08:38.411033 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.53s 2025-09-08 01:08:38.411039 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.96s 2025-09-08 01:08:38.411046 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.95s 2025-09-08 01:08:38.411052 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.10s 2025-09-08 01:08:38.411059 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.67s 2025-09-08 01:08:38.411066 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.50s 2025-09-08 01:08:38.411072 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.27s 2025-09-08 01:08:38.411079 | orchestrator | 
nova-cell : Fail if nova-compute service failed to register ------------ 10.75s 2025-09-08 01:08:38.411090 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.17s 2025-09-08 01:08:38.411097 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.28s 2025-09-08 01:08:38.411103 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 7.86s 2025-09-08 01:08:38.411110 | orchestrator | 2025-09-08 01:08:38 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:08:38.411117 | orchestrator | 2025-09-08 01:08:38 | INFO  | Task 429ea3c3-ef06-40ac-b740-be73f57d280a is in state SUCCESS 2025-09-08 01:08:38.411123 | orchestrator | 2025-09-08 01:08:38 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state STARTED 2025-09-08 01:08:38.411130 | orchestrator | 2025-09-08 01:08:38 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:08:38.411137 | orchestrator | 2025-09-08 01:08:38 | INFO  | Wait 1 second(s) until the next check
[identical state-check cycles for tasks 78667c52, 382332c0 and 29ba669e repeated every ~3 seconds from 01:08:41 to 01:09:36; trimmed]
2025-09-08 01:09:39.422658 | orchestrator | 2025-09-08 01:09:39 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:09:39.426673 | orchestrator | 2025-09-08 01:09:39 | INFO  | Task 382332c0-ae36-40f8-9308-8da87a67f401 is in state SUCCESS 2025-09-08 01:09:39.429076 | orchestrator | 2025-09-08 01:09:39.429117 | orchestrator | 2025-09-08 01:09:39.429130 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:09:39.429143 | orchestrator | 2025-09-08 01:09:39.429155 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:09:39.429166 | orchestrator | Monday 08 September 2025 01:08:36 +0000 (0:00:00.160) 0:00:00.160 ****** 2025-09-08 01:09:39.429177 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:39.429190 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:09:39.429278 | orchestrator | ok: 
[testbed-node-2] 2025-09-08 01:09:39.429293 | orchestrator | 2025-09-08 01:09:39.429305 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:09:39.429316 | orchestrator | Monday 08 September 2025 01:08:37 +0000 (0:00:00.257) 0:00:00.418 ****** 2025-09-08 01:09:39.429328 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-08 01:09:39.430245 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-08 01:09:39.430394 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-08 01:09:39.430411 | orchestrator | 2025-09-08 01:09:39.430425 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-08 01:09:39.430437 | orchestrator | 2025-09-08 01:09:39.430449 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-08 01:09:39.430461 | orchestrator | Monday 08 September 2025 01:08:37 +0000 (0:00:00.525) 0:00:00.944 ****** 2025-09-08 01:09:39.430472 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:09:39.430483 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:39.430494 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:09:39.430505 | orchestrator | 2025-09-08 01:09:39.430516 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:09:39.430528 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:09:39.430541 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:09:39.430552 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:09:39.430563 | orchestrator | 2025-09-08 01:09:39.430573 | orchestrator | 2025-09-08 01:09:39.430584 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-08 01:09:39.430595 | orchestrator | Monday 08 September 2025 01:08:38 +0000 (0:00:00.647) 0:00:01.592 ****** 2025-09-08 01:09:39.430606 | orchestrator | =============================================================================== 2025-09-08 01:09:39.430617 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.65s 2025-09-08 01:09:39.430628 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-09-08 01:09:39.430639 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-09-08 01:09:39.430649 | orchestrator | 2025-09-08 01:09:39.430660 | orchestrator | 2025-09-08 01:09:39.430671 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-08 01:09:39.430682 | orchestrator | 2025-09-08 01:09:39.430692 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-08 01:09:39.430703 | orchestrator | Monday 08 September 2025 01:07:11 +0000 (0:00:00.261) 0:00:00.261 ****** 2025-09-08 01:09:39.430714 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:09:39.430725 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:09:39.430736 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:09:39.430746 | orchestrator | 2025-09-08 01:09:39.430757 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-08 01:09:39.430768 | orchestrator | Monday 08 September 2025 01:07:11 +0000 (0:00:00.291) 0:00:00.552 ****** 2025-09-08 01:09:39.430778 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-08 01:09:39.430790 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-08 01:09:39.430934 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-08 01:09:39.430947 | orchestrator | 
2025-09-08 01:09:39.430958 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-08 01:09:39.430969 | orchestrator | 2025-09-08 01:09:39.430981 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-08 01:09:39.430992 | orchestrator | Monday 08 September 2025 01:07:12 +0000 (0:00:00.428) 0:00:00.981 ****** 2025-09-08 01:09:39.431003 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:09:39.431014 | orchestrator | 2025-09-08 01:09:39.431025 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-08 01:09:39.431036 | orchestrator | Monday 08 September 2025 01:07:12 +0000 (0:00:00.573) 0:00:01.555 ****** 2025-09-08 01:09:39.431052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.431213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.431230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.431242 | orchestrator | 2025-09-08 01:09:39.431254 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-08 01:09:39.431265 | orchestrator | Monday 08 September 2025 01:07:13 +0000 (0:00:00.795) 0:00:02.351 ****** 2025-09-08 01:09:39.431276 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-08 01:09:39.431287 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-08 01:09:39.431298 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-08 01:09:39.431309 | orchestrator | 2025-09-08 01:09:39.431320 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-08 01:09:39.431331 | orchestrator | Monday 08 September 2025 01:07:14 +0000 (0:00:00.825) 0:00:03.177 ****** 2025-09-08 01:09:39.431342 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-08 01:09:39.431353 | orchestrator | 2025-09-08 01:09:39.431363 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-08 01:09:39.431385 | orchestrator | Monday 08 September 2025 01:07:15 +0000 (0:00:00.739) 0:00:03.916 ****** 2025-09-08 01:09:39.431397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.431409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.431421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.431432 | orchestrator | 2025-09-08 01:09:39.431480 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-08 01:09:39.431494 | orchestrator | Monday 08 September 2025 01:07:16 +0000 (0:00:01.458) 0:00:05.375 ****** 2025-09-08 01:09:39.431505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:39.431518 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:39.431530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:39.431541 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:39.431552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:39.431570 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:39.431581 | orchestrator | 2025-09-08 01:09:39.431592 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-08 01:09:39.431603 | orchestrator | Monday 08 September 2025 01:07:17 +0000 (0:00:00.382) 0:00:05.758 ****** 2025-09-08 01:09:39.431614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:39.431626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:39.431638 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:09:39.431648 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:09:39.431691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-08 01:09:39.431704 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:09:39.431715 | orchestrator | 2025-09-08 01:09:39.431726 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-08 
01:09:39.431737 | orchestrator | Monday 08 September 2025 01:07:18 +0000 (0:00:00.873) 0:00:06.631 ****** 2025-09-08 01:09:39.431748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.431766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.431778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-08 01:09:39.431790 | orchestrator |
2025-09-08 01:09:39.431801 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-09-08 01:09:39.431812 | orchestrator | Monday 08 September 2025 01:07:19 +0000 (0:00:01.303) 0:00:07.935 ******
2025-09-08 01:09:39.431823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-08 01:09:39.431866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-08 01:09:39.431880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-08 01:09:39.431911 | orchestrator |
2025-09-08 01:09:39.431930 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-09-08 01:09:39.431941 | orchestrator | Monday 08 September 2025 01:07:20 +0000 (0:00:01.491) 0:00:09.426 ******
2025-09-08 01:09:39.431952 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:39.431963 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:39.431974 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:39.431984 | orchestrator |
2025-09-08 01:09:39.431995 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-09-08 01:09:39.432006 | orchestrator | Monday 08 September 2025 01:07:21 +0000 (0:00:00.482) 0:00:09.908 ******
2025-09-08 01:09:39.432017 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-08 01:09:39.432027 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-08 01:09:39.432038 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-08 01:09:39.432049 | orchestrator |
2025-09-08 01:09:39.432060 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-09-08 01:09:39.432070 | orchestrator | Monday 08 September 2025 01:07:23 +0000 (0:00:01.727) 0:00:11.636 ******
2025-09-08 01:09:39.432081 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-08 01:09:39.432093 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-08 01:09:39.432104 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-08 01:09:39.432114 | orchestrator |
2025-09-08 01:09:39.432125 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-09-08 01:09:39.432136 | orchestrator | Monday 08 September 2025 01:07:25 +0000 (0:00:02.256) 0:00:13.892 ******
2025-09-08 01:09:39.432147 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-08 01:09:39.432158 | orchestrator |
2025-09-08 01:09:39.432169 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-09-08 01:09:39.432179 | orchestrator | Monday 08 September 2025 01:07:27 +0000 (0:00:02.154) 0:00:16.046 ******
2025-09-08 01:09:39.432190 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-09-08 01:09:39.432201 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-09-08 01:09:39.432212 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:09:39.432223 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:09:39.432234 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:09:39.432245 | orchestrator |
2025-09-08 01:09:39.432256 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-09-08 01:09:39.432266 | orchestrator | Monday 08 September 2025 01:07:28 +0000 (0:00:00.998) 0:00:17.045 ******
2025-09-08 01:09:39.432277 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:39.432288 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:39.432299 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:39.432310 | orchestrator |
2025-09-08 01:09:39.432321 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-09-08 01:09:39.432332 | orchestrator | Monday 08 September 2025 01:07:28 +0000 (0:00:00.523) 0:00:17.569 ******
2025-09-08 01:09:39.432344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096561, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9410174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096561, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9410174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096561, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9410174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096623, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9631186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096623, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9631186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096623, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9631186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096572, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9433653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096572, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9433653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096572, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9433653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096624, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9653344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096624, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9653344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096624, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9653344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096586, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9476342, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096586, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9476342, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096586, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9476342, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096613, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9614236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096613, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9614236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096613, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9614236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096526, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9349818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096526, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9349818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096526, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9349818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096566, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.941712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096566, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.941712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096566, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.941712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096575, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.943916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096575, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.943916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096575, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.943916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096593, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9534101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096593, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9534101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096593, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9534101, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096620, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.962856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096620, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.962856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096620, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.962856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096569, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.942509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.432992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096569, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.942509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096608, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.960975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096569, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.942509, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096608, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.960975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096589, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9492378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096589, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9492378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096608, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.960975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096585, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9465318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096585, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9465318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096589, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9492378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096581, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9458888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096581, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9458888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096585, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9465318, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096596, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9596364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096596, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9596364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-08 01:09:39.433192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk':
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096581, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9458888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096577, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9452617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096577, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9452617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096596, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9596364, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096618, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.962608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096618, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.962608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096577, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9452617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096739, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9958546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096739, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9958546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096618, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.962608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096658, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9766898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096658, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9766898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433364 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096739, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9958546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096644, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9675405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096644, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9675405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433405 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096658, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9766898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096684, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9795833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096684, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9795833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096644, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9675405, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096635, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.965823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096635, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.965823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096684, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9795833, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096711, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9885402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096711, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1757290574.9885402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096635, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.965823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096689, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9854102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 682774, 'inode': 1096689, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9854102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096711, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9885402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096712, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9890862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096712, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9890862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096689, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9854102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096731, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9949508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096731, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9949508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096712, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9890862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096710, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9874103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433685 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096710, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9874103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096679, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.978547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096731, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9949508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-08 01:09:39.433727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096679, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.978547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096655, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9694102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096710, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9874103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096655, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9694102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096676, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.978037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096679, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.978547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096676, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.978037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096646, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9691586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096646, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9691586, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096655, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9694102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096682, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9788108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096682, 'dev': 161, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9788108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096676, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.978037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096727, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9934103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 222049, 'inode': 1096727, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9934103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096646, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9691586, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.433997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096721, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9915004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096721, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9915004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096682, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9788108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096638, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9660542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096638, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9660542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096727, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9934103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096640, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9668033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434247 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096640, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9668033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096721, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9915004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096707, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9864104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096707, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9864104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096638, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9660542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096717, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9904275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096717, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9904275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096640, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9668033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096707, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1757290574.9864104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096717, 'dev': 161, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1757290574.9904275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-08 01:09:39.434382 | orchestrator | 2025-09-08 01:09:39.434393 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-08 01:09:39.434404 | orchestrator | Monday 08 September 2025 01:08:10 +0000 (0:00:41.130) 0:00:58.699 ****** 2025-09-08 01:09:39.434421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.434451 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.434462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-08 01:09:39.434472 | orchestrator | 2025-09-08 01:09:39.434483 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-08 01:09:39.434492 | orchestrator | Monday 08 September 2025 01:08:11 +0000 (0:00:00.926) 0:00:59.626 ****** 2025-09-08 01:09:39.434502 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:09:39.434512 | orchestrator | 2025-09-08 01:09:39.434522 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-08 01:09:39.434532 | orchestrator | Monday 08 September 2025 01:08:13 +0000 
(0:00:02.303) 0:01:01.930 ******
2025-09-08 01:09:39.434541 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:09:39.434551 | orchestrator |
2025-09-08 01:09:39.434561 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-08 01:09:39.434570 | orchestrator | Monday 08 September 2025 01:08:15 +0000 (0:00:02.206) 0:01:04.136 ******
2025-09-08 01:09:39.434580 | orchestrator |
2025-09-08 01:09:39.434589 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-08 01:09:39.434599 | orchestrator | Monday 08 September 2025 01:08:15 +0000 (0:00:00.244) 0:01:04.380 ******
2025-09-08 01:09:39.434609 | orchestrator |
2025-09-08 01:09:39.434618 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-08 01:09:39.434628 | orchestrator | Monday 08 September 2025 01:08:15 +0000 (0:00:00.066) 0:01:04.446 ******
2025-09-08 01:09:39.434637 | orchestrator |
2025-09-08 01:09:39.434647 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-09-08 01:09:39.434656 | orchestrator | Monday 08 September 2025 01:08:15 +0000 (0:00:00.065) 0:01:04.512 ******
2025-09-08 01:09:39.434666 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:39.434676 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:39.434685 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:09:39.434695 | orchestrator |
2025-09-08 01:09:39.434704 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-09-08 01:09:39.434714 | orchestrator | Monday 08 September 2025 01:08:17 +0000 (0:00:01.890) 0:01:06.402 ******
2025-09-08 01:09:39.434724 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:39.434733 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:39.434743 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-09-08 01:09:39.434753 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-09-08 01:09:39.434762 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-09-08 01:09:39.434778 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:09:39.434788 | orchestrator |
2025-09-08 01:09:39.434798 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-09-08 01:09:39.434808 | orchestrator | Monday 08 September 2025 01:08:56 +0000 (0:00:38.786) 0:01:45.189 ******
2025-09-08 01:09:39.434817 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:39.434827 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:09:39.434837 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:09:39.434846 | orchestrator |
2025-09-08 01:09:39.434856 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-09-08 01:09:39.434865 | orchestrator | Monday 08 September 2025 01:09:30 +0000 (0:00:34.240) 0:02:19.430 ******
2025-09-08 01:09:39.434875 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:09:39.434903 | orchestrator |
2025-09-08 01:09:39.434913 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-09-08 01:09:39.434923 | orchestrator | Monday 08 September 2025 01:09:33 +0000 (0:00:02.384) 0:02:21.815 ******
2025-09-08 01:09:39.434937 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:39.434947 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:09:39.434957 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:09:39.434967 | orchestrator |
2025-09-08 01:09:39.434976 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-08 01:09:39.434986 | orchestrator | Monday 08 September 2025 01:09:33 +0000 (0:00:00.624) 0:02:22.440 ******
2025-09-08 01:09:39.434997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-09-08 01:09:39.435013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-08 01:09:39.435024 | orchestrator |
2025-09-08 01:09:39.435034 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-08 01:09:39.435044 | orchestrator | Monday 08 September 2025 01:09:36 +0000 (0:00:02.424) 0:02:24.865 ******
2025-09-08 01:09:39.435053 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:09:39.435063 | orchestrator |
2025-09-08 01:09:39.435072 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 01:09:39.435082 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-08 01:09:39.435092 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-08 01:09:39.435102 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-08 01:09:39.435112 | orchestrator |
2025-09-08 01:09:39.435121 | orchestrator |
2025-09-08 01:09:39.435131 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 01:09:39.435141 | orchestrator | Monday 08 September 2025 01:09:36 +0000 (0:00:00.274) 0:02:25.139 ******
2025-09-08 01:09:39.435150 | orchestrator | ===============================================================================
2025-09-08 01:09:39.435160 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 41.13s
2025-09-08 01:09:39.435170 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.79s
2025-09-08 01:09:39.435179 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.24s
2025-09-08 01:09:39.435189 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.42s
2025-09-08 01:09:39.435206 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.38s
2025-09-08 01:09:39.435215 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.30s
2025-09-08 01:09:39.435225 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 2.26s
2025-09-08 01:09:39.435235 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.21s
2025-09-08 01:09:39.435244 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 2.15s
2025-09-08 01:09:39.435254 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.89s
2025-09-08 01:09:39.435264 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.73s
2025-09-08 01:09:39.435273 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.49s
2025-09-08 01:09:39.435283 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.46s
2025-09-08 01:09:39.435292 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.30s
2025-09-08 01:09:39.435302 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.00s
2025-09-08 01:09:39.435311 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.93s
2025-09-08 01:09:39.435321 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.87s
2025-09-08 01:09:39.435331 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.83s
2025-09-08 01:09:39.435340 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.80s
2025-09-08 01:09:39.435350 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.74s
2025-09-08 01:09:39.435360 | orchestrator | 2025-09-08 01:09:39 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:09:39.435370 | orchestrator | 2025-09-08 01:09:39 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:42.468248 | orchestrator | 2025-09-08 01:09:42 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:09:42.468595 | orchestrator | 2025-09-08 01:09:42 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:09:42.468712 | orchestrator | 2025-09-08 01:09:42 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:45.515838 | orchestrator | 2025-09-08 01:09:45 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:09:45.516576 | orchestrator | 2025-09-08 01:09:45 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:09:45.516611 | orchestrator | 2025-09-08 01:09:45 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:48.569825 | orchestrator | 2025-09-08 01:09:48 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:09:48.570577 | orchestrator | 2025-09-08 01:09:48 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:09:48.570611 | orchestrator | 2025-09-08 01:09:48 | INFO  | Wait 1 second(s) until the
next check
2025-09-08 01:09:51.618973 | orchestrator | 2025-09-08 01:09:51 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:09:51.623215 | orchestrator | 2025-09-08 01:09:51 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:09:51.623249 | orchestrator | 2025-09-08 01:09:51 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:54.677596 | orchestrator | 2025-09-08 01:09:54 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:09:54.681252 | orchestrator | 2025-09-08 01:09:54 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:09:54.681291 | orchestrator | 2025-09-08 01:09:54 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:09:57.726691 | orchestrator | 2025-09-08 01:09:57 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:09:57.728288 | orchestrator | 2025-09-08 01:09:57 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:09:57.728452 | orchestrator | 2025-09-08 01:09:57 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:00.780425 | orchestrator | 2025-09-08 01:10:00 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:00.785134 | orchestrator | 2025-09-08 01:10:00 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:00.785229 | orchestrator | 2025-09-08 01:10:00 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:03.825180 | orchestrator | 2025-09-08 01:10:03 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:03.826320 | orchestrator | 2025-09-08 01:10:03 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:03.826349 | orchestrator | 2025-09-08 01:10:03 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:06.877465 | orchestrator | 2025-09-08 01:10:06 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:06.879408 | orchestrator | 2025-09-08 01:10:06 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:06.879964 | orchestrator | 2025-09-08 01:10:06 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:09.926627 | orchestrator | 2025-09-08 01:10:09 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:09.927761 | orchestrator | 2025-09-08 01:10:09 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:09.927878 | orchestrator | 2025-09-08 01:10:09 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:12.978617 | orchestrator | 2025-09-08 01:10:12 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:12.980852 | orchestrator | 2025-09-08 01:10:12 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:12.981254 | orchestrator | 2025-09-08 01:10:12 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:16.031835 | orchestrator | 2025-09-08 01:10:16 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:16.032824 | orchestrator | 2025-09-08 01:10:16 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:16.033860 | orchestrator | 2025-09-08 01:10:16 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:19.080653 | orchestrator | 2025-09-08 01:10:19 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:19.083284 | orchestrator | 2025-09-08 01:10:19 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:19.083308 | orchestrator | 2025-09-08 01:10:19 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:22.124571 | orchestrator | 2025-09-08 01:10:22 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:22.125930 | orchestrator | 2025-09-08 01:10:22 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:22.126151 | orchestrator | 2025-09-08 01:10:22 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:25.170267 | orchestrator | 2025-09-08 01:10:25 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:25.170850 | orchestrator | 2025-09-08 01:10:25 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:25.170914 | orchestrator | 2025-09-08 01:10:25 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:28.216422 | orchestrator | 2025-09-08 01:10:28 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:28.218353 | orchestrator | 2025-09-08 01:10:28 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:28.218388 | orchestrator | 2025-09-08 01:10:28 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:31.261765 | orchestrator | 2025-09-08 01:10:31 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:31.263644 | orchestrator | 2025-09-08 01:10:31 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:31.263917 | orchestrator | 2025-09-08 01:10:31 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:34.308425 | orchestrator | 2025-09-08 01:10:34 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:34.308572 | orchestrator | 2025-09-08 01:10:34 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED
2025-09-08 01:10:34.308597 | orchestrator | 2025-09-08 01:10:34 | INFO  | Wait 1 second(s) until the next check
2025-09-08 01:10:37.361868 | orchestrator | 2025-09-08 01:10:37 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED
2025-09-08 01:10:37.363817 | orchestrator | 2025-09-08 01:10:37 | INFO  | Task
29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:10:37.365546 | orchestrator | 2025-09-08 01:10:37 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:10:40.422126 | orchestrator | 2025-09-08 01:10:40 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:10:40.424512 | orchestrator | 2025-09-08 01:10:40 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:10:40.424561 | orchestrator | 2025-09-08 01:10:40 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:10:43.468408 | orchestrator | 2025-09-08 01:10:43 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:10:43.468946 | orchestrator | 2025-09-08 01:10:43 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:10:43.469205 | orchestrator | 2025-09-08 01:10:43 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:10:46.517257 | orchestrator | 2025-09-08 01:10:46 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state STARTED 2025-09-08 01:10:46.520504 | orchestrator | 2025-09-08 01:10:46 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:10:46.520812 | orchestrator | 2025-09-08 01:10:46 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:10:49.553744 | orchestrator | 2025-09-08 01:10:49 | INFO  | Task 78667c52-6ac4-480f-98f1-eaf0b0c86a4c is in state SUCCESS 2025-09-08 01:10:49.554549 | orchestrator | 2025-09-08 01:10:49 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:10:49.554671 | orchestrator | 2025-09-08 01:10:49 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:10:52.601720 | orchestrator | 2025-09-08 01:10:52 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:10:52.601833 | orchestrator | 2025-09-08 01:10:52 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:10:55.655295 | orchestrator | 
2025-09-08 01:10:55 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:10:55.655422 | orchestrator | 2025-09-08 01:10:55 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:10:58.701519 | orchestrator | 2025-09-08 01:10:58 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:10:58.701631 | orchestrator | 2025-09-08 01:10:58 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:01.750746 | orchestrator | 2025-09-08 01:11:01 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:01.750881 | orchestrator | 2025-09-08 01:11:01 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:04.798167 | orchestrator | 2025-09-08 01:11:04 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:04.798257 | orchestrator | 2025-09-08 01:11:04 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:07.850779 | orchestrator | 2025-09-08 01:11:07 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:07.850912 | orchestrator | 2025-09-08 01:11:07 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:10.905167 | orchestrator | 2025-09-08 01:11:10 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:10.905286 | orchestrator | 2025-09-08 01:11:10 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:13.950159 | orchestrator | 2025-09-08 01:11:13 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:13.950283 | orchestrator | 2025-09-08 01:11:13 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:16.988573 | orchestrator | 2025-09-08 01:11:16 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:16.988691 | orchestrator | 2025-09-08 01:11:16 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:20.039446 | orchestrator | 2025-09-08 
01:11:20 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:20.039574 | orchestrator | 2025-09-08 01:11:20 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:23.088509 | orchestrator | 2025-09-08 01:11:23 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:23.088632 | orchestrator | 2025-09-08 01:11:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:26.122979 | orchestrator | 2025-09-08 01:11:26 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:26.123152 | orchestrator | 2025-09-08 01:11:26 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:29.169873 | orchestrator | 2025-09-08 01:11:29 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:29.170001 | orchestrator | 2025-09-08 01:11:29 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:32.211574 | orchestrator | 2025-09-08 01:11:32 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:32.211671 | orchestrator | 2025-09-08 01:11:32 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:35.251106 | orchestrator | 2025-09-08 01:11:35 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:35.251222 | orchestrator | 2025-09-08 01:11:35 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:38.290892 | orchestrator | 2025-09-08 01:11:38 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:38.291050 | orchestrator | 2025-09-08 01:11:38 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:41.339421 | orchestrator | 2025-09-08 01:11:41 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:41.339526 | orchestrator | 2025-09-08 01:11:41 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:44.389514 | orchestrator | 2025-09-08 01:11:44 | INFO 
 | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:44.389624 | orchestrator | 2025-09-08 01:11:44 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:47.439015 | orchestrator | 2025-09-08 01:11:47 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:47.439211 | orchestrator | 2025-09-08 01:11:47 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:50.484586 | orchestrator | 2025-09-08 01:11:50 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:50.484689 | orchestrator | 2025-09-08 01:11:50 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:53.528558 | orchestrator | 2025-09-08 01:11:53 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:53.528668 | orchestrator | 2025-09-08 01:11:53 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:56.563774 | orchestrator | 2025-09-08 01:11:56 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:56.563880 | orchestrator | 2025-09-08 01:11:56 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:11:59.610163 | orchestrator | 2025-09-08 01:11:59 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:11:59.610267 | orchestrator | 2025-09-08 01:11:59 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:02.653794 | orchestrator | 2025-09-08 01:12:02 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:02.653916 | orchestrator | 2025-09-08 01:12:02 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:05.698242 | orchestrator | 2025-09-08 01:12:05 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:05.698340 | orchestrator | 2025-09-08 01:12:05 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:08.742593 | orchestrator | 2025-09-08 01:12:08 | INFO  | Task 
29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:08.742700 | orchestrator | 2025-09-08 01:12:08 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:11.789209 | orchestrator | 2025-09-08 01:12:11 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:11.789310 | orchestrator | 2025-09-08 01:12:11 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:14.848143 | orchestrator | 2025-09-08 01:12:14 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:14.848241 | orchestrator | 2025-09-08 01:12:14 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:17.895676 | orchestrator | 2025-09-08 01:12:17 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:17.895774 | orchestrator | 2025-09-08 01:12:17 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:20.939338 | orchestrator | 2025-09-08 01:12:20 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:20.939445 | orchestrator | 2025-09-08 01:12:20 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:23.979699 | orchestrator | 2025-09-08 01:12:23 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:23.979797 | orchestrator | 2025-09-08 01:12:23 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:27.023486 | orchestrator | 2025-09-08 01:12:27 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:27.023597 | orchestrator | 2025-09-08 01:12:27 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:30.074386 | orchestrator | 2025-09-08 01:12:30 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:30.074490 | orchestrator | 2025-09-08 01:12:30 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:33.117283 | orchestrator | 2025-09-08 01:12:33 | INFO  | Task 
29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:33.117381 | orchestrator | 2025-09-08 01:12:33 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:36.156648 | orchestrator | 2025-09-08 01:12:36 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:36.156752 | orchestrator | 2025-09-08 01:12:36 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:39.207767 | orchestrator | 2025-09-08 01:12:39 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:39.207873 | orchestrator | 2025-09-08 01:12:39 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:42.260758 | orchestrator | 2025-09-08 01:12:42 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:42.260865 | orchestrator | 2025-09-08 01:12:42 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:45.314890 | orchestrator | 2025-09-08 01:12:45 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:45.314982 | orchestrator | 2025-09-08 01:12:45 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:48.365066 | orchestrator | 2025-09-08 01:12:48 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:48.365239 | orchestrator | 2025-09-08 01:12:48 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:51.414875 | orchestrator | 2025-09-08 01:12:51 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:51.414989 | orchestrator | 2025-09-08 01:12:51 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:54.462620 | orchestrator | 2025-09-08 01:12:54 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:54.462726 | orchestrator | 2025-09-08 01:12:54 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:12:57.506420 | orchestrator | 2025-09-08 01:12:57 | INFO  | Task 
29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:12:57.506531 | orchestrator | 2025-09-08 01:12:57 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:00.559019 | orchestrator | 2025-09-08 01:13:00 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:13:00.559115 | orchestrator | 2025-09-08 01:13:00 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:03.606271 | orchestrator | 2025-09-08 01:13:03 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:13:03.606381 | orchestrator | 2025-09-08 01:13:03 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:06.651223 | orchestrator | 2025-09-08 01:13:06 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:13:06.651333 | orchestrator | 2025-09-08 01:13:06 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:09.702613 | orchestrator | 2025-09-08 01:13:09 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:13:09.702722 | orchestrator | 2025-09-08 01:13:09 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:12.749041 | orchestrator | 2025-09-08 01:13:12 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:13:12.749187 | orchestrator | 2025-09-08 01:13:12 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:15.796704 | orchestrator | 2025-09-08 01:13:15 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:13:15.796817 | orchestrator | 2025-09-08 01:13:15 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:18.841360 | orchestrator | 2025-09-08 01:13:18 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state STARTED 2025-09-08 01:13:18.841473 | orchestrator | 2025-09-08 01:13:18 | INFO  | Wait 1 second(s) until the next check 2025-09-08 01:13:21.887357 | orchestrator | 2025-09-08 01:13:21.887448 | orchestrator | 
2025-09-08 01:13:21.887460 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-08 01:13:21.887468 | orchestrator |
2025-09-08 01:13:21.887476 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-08 01:13:21.887485 | orchestrator | Monday 08 September 2025 01:04:30 +0000 (0:00:00.198) 0:00:00.198 ******
2025-09-08 01:13:21.887492 | orchestrator | changed: [localhost]
2025-09-08 01:13:21.887501 | orchestrator |
2025-09-08 01:13:21.887509 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-08 01:13:21.887516 | orchestrator | Monday 08 September 2025 01:04:32 +0000 (0:00:01.630) 0:00:01.828 ******
2025-09-08 01:13:21.887524 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2025-09-08 01:13:21.887532 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2025-09-08 01:13:21.887539 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2025-09-08 01:13:21.887546 | orchestrator |
2025-09-08 01:13:21.887554 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-08 01:13:21.887561 | orchestrator |
2025-09-08 01:13:21.887568 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-08 01:13:21.887575 | orchestrator |
2025-09-08 01:13:21.887582 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-08 01:13:21.887589 | orchestrator |
2025-09-08 01:13:21.887596 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-08 01:13:21.887603 | orchestrator |
2025-09-08 01:13:21.887610 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-08 01:13:21.887617 | orchestrator |
2025-09-08 01:13:21.887625 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-08 01:13:21.887632 | orchestrator |
2025-09-08 01:13:21.887639 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-08 01:13:21.887646 | orchestrator |
2025-09-08 01:13:21.887654 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-09-08 01:13:21.887661 | orchestrator | changed: [localhost]
2025-09-08 01:13:21.887669 | orchestrator |
2025-09-08 01:13:21.887676 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-08 01:13:21.887683 | orchestrator | Monday 08 September 2025 01:10:38 +0000 (0:06:06.157) 0:06:07.985 ******
2025-09-08 01:13:21.887690 | orchestrator | changed: [localhost]
2025-09-08 01:13:21.887697 | orchestrator |
2025-09-08 01:13:21.887704 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:13:21.887711 | orchestrator |
2025-09-08 01:13:21.887719 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:13:21.887726 | orchestrator | Monday 08 September 2025 01:10:48 +0000 (0:00:09.325) 0:06:17.310 ******
2025-09-08 01:13:21.887733 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:21.887740 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:13:21.887747 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:13:21.887754 | orchestrator |
2025-09-08 01:13:21.887761 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:13:21.887769 | orchestrator | Monday 08 September 2025 01:10:48 +0000 (0:00:00.316) 0:06:17.627 ******
2025-09-08 01:13:21.887776 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-09-08 01:13:21.887803 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-09-08 01:13:21.887812 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-09-08 01:13:21.887819 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-09-08 01:13:21.887826 | orchestrator |
2025-09-08 01:13:21.887833 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-09-08 01:13:21.887841 | orchestrator | skipping: no hosts matched
2025-09-08 01:13:21.887849 | orchestrator |
2025-09-08 01:13:21.887856 | orchestrator | PLAY RECAP *********************************************************************
2025-09-08 01:13:21.887863 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:13:21.887872 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:13:21.887880 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:13:21.887887 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-08 01:13:21.887896 | orchestrator |
2025-09-08 01:13:21.887905 | orchestrator |
2025-09-08 01:13:21.887914 | orchestrator | TASKS RECAP ********************************************************************
2025-09-08 01:13:21.887934 | orchestrator | Monday 08 September 2025 01:10:49 +0000 (0:00:00.672) 0:06:18.300 ******
2025-09-08 01:13:21.887943 | orchestrator | ===============================================================================
2025-09-08 01:13:21.887952 | orchestrator | Download ironic-agent initramfs --------------------------------------- 366.16s
2025-09-08 01:13:21.887960 | orchestrator | Download ironic-agent kernel -------------------------------------------- 9.32s
2025-09-08 01:13:21.887969 | orchestrator | Ensure the destination directory exists --------------------------------- 1.63s
2025-09-08 01:13:21.887978 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2025-09-08 01:13:21.887986 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-09-08 01:13:21.887995 | orchestrator |
2025-09-08 01:13:21.888003 | orchestrator |
2025-09-08 01:13:21.888011 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-08 01:13:21.888020 | orchestrator |
2025-09-08 01:13:21.888028 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-08 01:13:21.888037 | orchestrator | Monday 08 September 2025 01:08:39 +0000 (0:00:00.238) 0:00:00.238 ******
2025-09-08 01:13:21.888045 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:21.888066 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:13:21.888075 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:13:21.888083 | orchestrator |
2025-09-08 01:13:21.888092 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-08 01:13:21.888141 | orchestrator | Monday 08 September 2025 01:08:39 +0000 (0:00:00.275) 0:00:00.513 ******
2025-09-08 01:13:21.888151 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-09-08 01:13:21.888160 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-09-08 01:13:21.888168 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-09-08 01:13:21.888177 | orchestrator |
2025-09-08 01:13:21.888185 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-09-08 01:13:21.888193 | orchestrator |
2025-09-08 01:13:21.888202 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-08 01:13:21.888211 | orchestrator | Monday 08 September 2025 01:08:39 +0000 (0:00:00.438) 0:00:00.951 ******
2025-09-08 01:13:21.888219 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:13:21.888228 | orchestrator |
2025-09-08 01:13:21.888238 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-09-08 01:13:21.888246 | orchestrator | Monday 08 September 2025 01:08:40 +0000 (0:00:00.520) 0:00:01.472 ******
2025-09-08 01:13:21.888260 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-09-08 01:13:21.888268 | orchestrator |
2025-09-08 01:13:21.888275 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-09-08 01:13:21.888282 | orchestrator | Monday 08 September 2025 01:08:43 +0000 (0:00:03.540) 0:00:05.012 ******
2025-09-08 01:13:21.888289 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-09-08 01:13:21.888297 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-09-08 01:13:21.888304 | orchestrator |
2025-09-08 01:13:21.888311 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-09-08 01:13:21.888318 | orchestrator | Monday 08 September 2025 01:08:50 +0000 (0:00:06.529) 0:00:11.542 ******
2025-09-08 01:13:21.888326 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-08 01:13:21.888333 | orchestrator |
2025-09-08 01:13:21.888340 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-09-08 01:13:21.888347 | orchestrator | Monday 08 September 2025 01:08:53 +0000 (0:00:03.249) 0:00:14.792 ******
2025-09-08 01:13:21.888355 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-08 01:13:21.888362 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-08 01:13:21.888369 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-09-08 01:13:21.888376 | orchestrator |
2025-09-08 01:13:21.888384 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-09-08 01:13:21.888391 | orchestrator | Monday 08 September 2025 01:09:01 +0000 (0:00:07.956) 0:00:22.748 ******
2025-09-08 01:13:21.888398 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-08 01:13:21.888406 | orchestrator |
2025-09-08 01:13:21.888413 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-09-08 01:13:21.888420 | orchestrator | Monday 08 September 2025 01:09:04 +0000 (0:00:03.301) 0:00:26.050 ******
2025-09-08 01:13:21.888427 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-08 01:13:21.888434 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-09-08 01:13:21.888442 | orchestrator |
2025-09-08 01:13:21.888449 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-09-08 01:13:21.888456 | orchestrator | Monday 08 September 2025 01:09:12 +0000 (0:00:07.490) 0:00:33.540 ******
2025-09-08 01:13:21.888463 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-09-08 01:13:21.888470 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-09-08 01:13:21.888477 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-09-08 01:13:21.888485 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-09-08 01:13:21.888492 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-09-08 01:13:21.888499 | orchestrator |
2025-09-08 01:13:21.888506 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-08 01:13:21.888513 | orchestrator | Monday 08 September 2025 01:09:27 +0000 (0:00:14.982) 0:00:48.523 ******
2025-09-08 01:13:21.888521 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:13:21.888528 | orchestrator |
2025-09-08 01:13:21.888540 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-09-08 01:13:21.888547 | orchestrator | Monday 08 September 2025 01:09:27 +0000 (0:00:00.573) 0:00:49.096 ******
2025-09-08 01:13:21.888554 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:21.888561 | orchestrator |
2025-09-08 01:13:21.888569 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-09-08 01:13:21.888576 | orchestrator | Monday 08 September 2025 01:09:33 +0000 (0:00:05.126) 0:00:54.223 ******
2025-09-08 01:13:21.888583 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:21.888595 | orchestrator |
2025-09-08 01:13:21.888603 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-08 01:13:21.888610 | orchestrator | Monday 08 September 2025 01:09:37 +0000 (0:00:04.417) 0:00:58.640 ******
2025-09-08 01:13:21.888617 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:21.888624 | orchestrator |
2025-09-08 01:13:21.888631 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-09-08 01:13:21.888639 | orchestrator | Monday 08 September 2025 01:09:40 +0000 (0:00:03.223) 0:01:01.864 ******
2025-09-08 01:13:21.888646 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-08 01:13:21.888667 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-08 01:13:21.888674 | orchestrator |
2025-09-08 01:13:21.888682 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-09-08 01:13:21.888689 | orchestrator | Monday 08 September 2025 01:09:51 +0000 (0:00:10.832) 0:01:12.697 ******
2025-09-08 01:13:21.888696 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-09-08 01:13:21.888704 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-09-08 01:13:21.888711 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-09-08 01:13:21.888719 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-09-08 01:13:21.888726 | orchestrator |
2025-09-08 01:13:21.888733 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-09-08 01:13:21.888740 | orchestrator | Monday 08 September 2025 01:10:07 +0000 (0:00:16.356) 0:01:29.053 ******
2025-09-08 01:13:21.888748 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:21.888755 | orchestrator |
2025-09-08 01:13:21.888762 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-09-08 01:13:21.888770 | orchestrator | Monday 08 September 2025 01:10:12 +0000 (0:00:04.445) 0:01:33.498 ******
2025-09-08 01:13:21.888777 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:21.888784 | orchestrator |
2025-09-08 01:13:21.888791 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-09-08 01:13:21.888799 | orchestrator | Monday 08 September 2025 01:10:17 +0000 (0:00:05.573) 0:01:39.071 ******
2025-09-08 01:13:21.888806 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:13:21.888813 | orchestrator |
2025-09-08 01:13:21.888821 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-09-08 01:13:21.888828 | orchestrator | Monday 08 September 2025 01:10:18 +0000 (0:00:00.228) 0:01:39.300 ******
2025-09-08 01:13:21.888835 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:21.888843 | orchestrator |
2025-09-08 01:13:21.888850 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-08 01:13:21.888857 | orchestrator | Monday 08 September 2025 01:10:23 +0000 (0:00:05.187) 0:01:44.487 ******
2025-09-08 01:13:21.888864 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:13:21.888872 | orchestrator |
2025-09-08 01:13:21.888879 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-09-08 01:13:21.888886 | orchestrator | Monday 08 September 2025 01:10:24 +0000 (0:00:00.975) 0:01:45.462 ******
2025-09-08 01:13:21.888894 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:21.888901 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:21.888917 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:21.888925 | orchestrator |
2025-09-08 01:13:21.888932 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-09-08 01:13:21.888939 | orchestrator | Monday 08 September 2025 01:10:29 +0000 (0:00:05.359) 0:01:50.822 ******
2025-09-08 01:13:21.888953 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:21.888960 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:21.888968 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:21.888975 | orchestrator |
2025-09-08 01:13:21.888982 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-09-08 01:13:21.888989 | orchestrator | Monday 08 September 2025 01:10:34 +0000 (0:00:04.528) 0:01:55.351 ******
2025-09-08 01:13:21.888996 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:21.889003 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:21.889011 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:21.889018 | orchestrator |
2025-09-08 01:13:21.889025 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-09-08 01:13:21.889032 | orchestrator | Monday 08 September 2025 01:10:35 +0000 (0:00:02.141) 0:01:56.208 ******
2025-09-08 01:13:21.889040 | orchestrator | ok: [testbed-node-1]
2025-09-08 01:13:21.889047 | orchestrator | ok: [testbed-node-0]
2025-09-08 01:13:21.889054 | orchestrator | ok: [testbed-node-2]
2025-09-08 01:13:21.889061 | orchestrator |
2025-09-08 01:13:21.889069 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-09-08 01:13:21.889076 | orchestrator | Monday 08 September 2025 01:10:37 +0000 (0:00:02.141) 0:01:58.349 ******
2025-09-08 01:13:21.889083 | orchestrator | changed: [testbed-node-0]
2025-09-08 01:13:21.889090 | orchestrator | changed: [testbed-node-1]
2025-09-08 01:13:21.889098 | orchestrator | changed: [testbed-node-2]
2025-09-08 01:13:21.889117 | orchestrator |
2025-09-08 01:13:21.889128 | orchestrator | TASK [octavia 
: Create octavia-interface service] ****************************** 2025-09-08 01:13:21.889136 | orchestrator | Monday 08 September 2025 01:10:38 +0000 (0:00:01.487) 0:01:59.837 ****** 2025-09-08 01:13:21.889143 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.889150 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:21.889158 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:21.889165 | orchestrator | 2025-09-08 01:13:21.889172 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-08 01:13:21.889179 | orchestrator | Monday 08 September 2025 01:10:39 +0000 (0:00:01.189) 0:02:01.027 ****** 2025-09-08 01:13:21.889187 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:21.889194 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:21.889201 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.889208 | orchestrator | 2025-09-08 01:13:21.889215 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-08 01:13:21.889223 | orchestrator | Monday 08 September 2025 01:10:41 +0000 (0:00:02.014) 0:02:03.042 ****** 2025-09-08 01:13:21.889230 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.889237 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:21.889244 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:21.889252 | orchestrator | 2025-09-08 01:13:21.889266 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-08 01:13:21.889274 | orchestrator | Monday 08 September 2025 01:10:43 +0000 (0:00:01.715) 0:02:04.758 ****** 2025-09-08 01:13:21.889281 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:13:21.889288 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:13:21.889295 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:13:21.889302 | orchestrator | 2025-09-08 01:13:21.889310 | orchestrator | TASK [octavia : Gather facts] 
************************************************** 2025-09-08 01:13:21.889317 | orchestrator | Monday 08 September 2025 01:10:44 +0000 (0:00:00.694) 0:02:05.452 ****** 2025-09-08 01:13:21.889324 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:13:21.889331 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:13:21.889338 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:13:21.889346 | orchestrator | 2025-09-08 01:13:21.889353 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-08 01:13:21.889360 | orchestrator | Monday 08 September 2025 01:10:48 +0000 (0:00:03.781) 0:02:09.234 ****** 2025-09-08 01:13:21.889368 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-08 01:13:21.889380 | orchestrator | 2025-09-08 01:13:21.889387 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-08 01:13:21.889394 | orchestrator | Monday 08 September 2025 01:10:48 +0000 (0:00:00.730) 0:02:09.965 ****** 2025-09-08 01:13:21.889402 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:13:21.889409 | orchestrator | 2025-09-08 01:13:21.889416 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-08 01:13:21.889423 | orchestrator | Monday 08 September 2025 01:10:52 +0000 (0:00:03.458) 0:02:13.423 ****** 2025-09-08 01:13:21.889431 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:13:21.889438 | orchestrator | 2025-09-08 01:13:21.889445 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-08 01:13:21.889452 | orchestrator | Monday 08 September 2025 01:10:55 +0000 (0:00:03.246) 0:02:16.669 ****** 2025-09-08 01:13:21.889460 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-08 01:13:21.889467 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-08 
01:13:21.889474 | orchestrator | 2025-09-08 01:13:21.889482 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-08 01:13:21.889489 | orchestrator | Monday 08 September 2025 01:11:02 +0000 (0:00:06.802) 0:02:23.472 ****** 2025-09-08 01:13:21.889496 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:13:21.889503 | orchestrator | 2025-09-08 01:13:21.889511 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-08 01:13:21.889518 | orchestrator | Monday 08 September 2025 01:11:05 +0000 (0:00:03.608) 0:02:27.080 ****** 2025-09-08 01:13:21.889525 | orchestrator | ok: [testbed-node-0] 2025-09-08 01:13:21.889532 | orchestrator | ok: [testbed-node-1] 2025-09-08 01:13:21.889540 | orchestrator | ok: [testbed-node-2] 2025-09-08 01:13:21.889547 | orchestrator | 2025-09-08 01:13:21.889554 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-08 01:13:21.889561 | orchestrator | Monday 08 September 2025 01:11:06 +0000 (0:00:00.325) 0:02:27.406 ****** 2025-09-08 01:13:21.889572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:21.889587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:21.889601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:21.889614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:21.889623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:21.889631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:21.889639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.889651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.889665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.889679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.889687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.889694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.889702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.889710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.889721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.889734 | orchestrator | 2025-09-08 01:13:21.889741 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-08 01:13:21.889749 | orchestrator | Monday 08 September 2025 01:11:08 +0000 (0:00:02.497) 0:02:29.903 ****** 2025-09-08 01:13:21.889756 | 
orchestrator | skipping: [testbed-node-0] 2025-09-08 01:13:21.889764 | orchestrator | 2025-09-08 01:13:21.889775 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-08 01:13:21.889782 | orchestrator | Monday 08 September 2025 01:11:08 +0000 (0:00:00.127) 0:02:30.031 ****** 2025-09-08 01:13:21.889789 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:13:21.889797 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:13:21.889804 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:13:21.889811 | orchestrator | 2025-09-08 01:13:21.889818 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-08 01:13:21.889826 | orchestrator | Monday 08 September 2025 01:11:09 +0000 (0:00:00.497) 0:02:30.528 ****** 2025-09-08 01:13:21.889834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:21.889842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:21.889850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:21.889857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:21.889878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:21.889886 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:13:21.889900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:21.889908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:21.889916 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:21.889923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:21.889931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:21.889943 | orchestrator | skipping: 
[testbed-node-1] 2025-09-08 01:13:21.889954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-08 01:13:21.889967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-08 01:13:21 | INFO  | Task 29ba669e-33c1-45c7-a3e4-277e2d3b1d39 is in state SUCCESS 2025-09-08 01:13:21.889988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-08 01:13:21.889996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-08 01:13:21.890003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-08 01:13:21.890011 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:13:21.890056 | orchestrator | 2025-09-08 01:13:21.890064 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-08 01:13:21.890072 | orchestrator | Monday 08 September 2025 01:11:10 +0000 
(0:00:00.652) 0:02:31.181 ******
2025-09-08 01:13:21.890086 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-08 01:13:21.890093 | orchestrator |
2025-09-08 01:13:21.890113 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2025-09-08 01:13:21.890121 | orchestrator | Monday 08 September 2025 01:11:10 +0000 (0:00:00.593) 0:02:31.774 ******
2025-09-08 01:13:21.890133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:21.890300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:21.890308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:21.890316 | orchestrator |
2025-09-08 01:13:21.890323 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2025-09-08 01:13:21.890331 | orchestrator | Monday 08 September 2025 01:11:16 +0000 (0:00:05.552) 0:02:37.327 ******
2025-09-08 01:13:21.890338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:21.890389 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:13:21.890397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:21.890444 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:13:21.890457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:21.890500 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:13:21.890507 | orchestrator |
2025-09-08 01:13:21.890514 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2025-09-08 01:13:21.890522 | orchestrator | Monday 08 September 2025 01:11:16 +0000 (0:00:00.666) 0:02:37.993 ******
2025-09-08 01:13:21.890535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:21.890585 | orchestrator | skipping: [testbed-node-0]
2025-09-08 01:13:21.890593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:21.890647 | orchestrator | skipping: [testbed-node-1]
2025-09-08 01:13:21.890654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-08 01:13:21.890701 | orchestrator | skipping: [testbed-node-2]
2025-09-08 01:13:21.890709 | orchestrator |
2025-09-08 01:13:21.890716 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2025-09-08 01:13:21.890729 | orchestrator | Monday 08 September 2025 01:11:17 +0000 (0:00:00.921) 0:02:38.915 ******
2025-09-08 01:13:21.890737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-08 01:13:21.890769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-08 01:13:21.890797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-08 01:13:21.890844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.890856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.890864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.890871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.890879 | orchestrator | 2025-09-08 01:13:21.890886 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-08 01:13:21.890893 | orchestrator | Monday 08 September 2025 01:11:23 +0000 (0:00:05.941) 0:02:44.857 ****** 2025-09-08 01:13:21.890901 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-08 01:13:21.890908 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-08 01:13:21.890916 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-08 01:13:21.890923 | orchestrator | 2025-09-08 01:13:21.890930 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-08 01:13:21.890941 | orchestrator | Monday 08 September 2025 01:11:25 +0000 (0:00:01.647) 0:02:46.505 ****** 2025-09-08 01:13:21.890953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 
2025-09-08 01:13:21.890966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:21.890974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:21.890982 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:21.890989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:21.891000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:21.891013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891140 | orchestrator | 2025-09-08 01:13:21.891148 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-08 01:13:21.891155 | orchestrator | Monday 08 September 2025 01:11:41 +0000 (0:00:16.596) 0:03:03.101 ****** 2025-09-08 01:13:21.891162 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.891170 | orchestrator | 
changed: [testbed-node-1] 2025-09-08 01:13:21.891177 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:21.891185 | orchestrator | 2025-09-08 01:13:21.891192 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-08 01:13:21.891199 | orchestrator | Monday 08 September 2025 01:11:43 +0000 (0:00:01.506) 0:03:04.608 ****** 2025-09-08 01:13:21.891206 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-08 01:13:21.891214 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-08 01:13:21.891221 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-08 01:13:21.891228 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-08 01:13:21.891236 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-08 01:13:21.891243 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-08 01:13:21.891250 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-08 01:13:21.891257 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-08 01:13:21.891265 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-08 01:13:21.891272 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-08 01:13:21.891279 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-08 01:13:21.891286 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-08 01:13:21.891294 | orchestrator | 2025-09-08 01:13:21.891301 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-08 01:13:21.891308 | orchestrator | Monday 08 September 2025 01:11:48 +0000 (0:00:05.408) 0:03:10.017 ****** 2025-09-08 01:13:21.891315 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-08 01:13:21.891322 
| orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-08 01:13:21.891330 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-08 01:13:21.891337 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-08 01:13:21.891344 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-08 01:13:21.891352 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-08 01:13:21.891359 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-08 01:13:21.891371 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-08 01:13:21.891378 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-08 01:13:21.891386 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-08 01:13:21.891393 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-08 01:13:21.891400 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-08 01:13:21.891407 | orchestrator | 2025-09-08 01:13:21.891418 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-08 01:13:21.891426 | orchestrator | Monday 08 September 2025 01:11:54 +0000 (0:00:05.505) 0:03:15.522 ****** 2025-09-08 01:13:21.891433 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-08 01:13:21.891440 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-08 01:13:21.891448 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-08 01:13:21.891455 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-08 01:13:21.891462 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-08 01:13:21.891469 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-08 01:13:21.891477 | 
orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-08 01:13:21.891484 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-08 01:13:21.891491 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-08 01:13:21.891498 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-08 01:13:21.891510 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-08 01:13:21.891518 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-08 01:13:21.891525 | orchestrator | 2025-09-08 01:13:21.891533 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-08 01:13:21.891540 | orchestrator | Monday 08 September 2025 01:11:59 +0000 (0:00:05.205) 0:03:20.728 ****** 2025-09-08 01:13:21.891548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:21.891555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:21.891568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-08 01:13:21.891579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:21.891592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:21.891600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-08 01:13:21.891608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-08 01:13:21.891691 | orchestrator | 2025-09-08 01:13:21.891698 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-08 01:13:21.891705 | orchestrator | Monday 08 September 2025 01:12:03 +0000 (0:00:03.952) 0:03:24.681 ****** 2025-09-08 01:13:21.891712 | orchestrator | skipping: [testbed-node-0] 2025-09-08 01:13:21.891719 | orchestrator | skipping: [testbed-node-1] 2025-09-08 01:13:21.891725 | orchestrator | skipping: [testbed-node-2] 2025-09-08 01:13:21.891732 | orchestrator | 2025-09-08 01:13:21.891739 | orchestrator | TASK [octavia : Creating Octavia database] 
************************************* 2025-09-08 01:13:21.891746 | orchestrator | Monday 08 September 2025 01:12:03 +0000 (0:00:00.315) 0:03:24.996 ****** 2025-09-08 01:13:21.891752 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.891759 | orchestrator | 2025-09-08 01:13:21.891766 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-08 01:13:21.891772 | orchestrator | Monday 08 September 2025 01:12:06 +0000 (0:00:02.212) 0:03:27.209 ****** 2025-09-08 01:13:21.891779 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.891786 | orchestrator | 2025-09-08 01:13:21.891792 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-08 01:13:21.891799 | orchestrator | Monday 08 September 2025 01:12:08 +0000 (0:00:02.542) 0:03:29.751 ****** 2025-09-08 01:13:21.891806 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.891812 | orchestrator | 2025-09-08 01:13:21.891819 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-08 01:13:21.891826 | orchestrator | Monday 08 September 2025 01:12:10 +0000 (0:00:02.277) 0:03:32.029 ****** 2025-09-08 01:13:21.891832 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.891839 | orchestrator | 2025-09-08 01:13:21.891846 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-08 01:13:21.891856 | orchestrator | Monday 08 September 2025 01:12:13 +0000 (0:00:02.248) 0:03:34.277 ****** 2025-09-08 01:13:21.891863 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.891869 | orchestrator | 2025-09-08 01:13:21.891876 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-08 01:13:21.891883 | orchestrator | Monday 08 September 2025 01:12:35 +0000 (0:00:22.014) 0:03:56.292 ****** 2025-09-08 01:13:21.891889 | orchestrator | 2025-09-08 
01:13:21.891896 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-08 01:13:21.891903 | orchestrator | Monday 08 September 2025 01:12:35 +0000 (0:00:00.089) 0:03:56.381 ****** 2025-09-08 01:13:21.891909 | orchestrator | 2025-09-08 01:13:21.891916 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-08 01:13:21.891923 | orchestrator | Monday 08 September 2025 01:12:35 +0000 (0:00:00.075) 0:03:56.457 ****** 2025-09-08 01:13:21.891929 | orchestrator | 2025-09-08 01:13:21.891936 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-08 01:13:21.891943 | orchestrator | Monday 08 September 2025 01:12:35 +0000 (0:00:00.074) 0:03:56.532 ****** 2025-09-08 01:13:21.891949 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.891956 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:21.891963 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:21.891969 | orchestrator | 2025-09-08 01:13:21.891981 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-08 01:13:21.891988 | orchestrator | Monday 08 September 2025 01:12:52 +0000 (0:00:17.191) 0:04:13.724 ****** 2025-09-08 01:13:21.891995 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.892001 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:21.892008 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:21.892014 | orchestrator | 2025-09-08 01:13:21.892021 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-08 01:13:21.892032 | orchestrator | Monday 08 September 2025 01:12:59 +0000 (0:00:06.973) 0:04:20.697 ****** 2025-09-08 01:13:21.892039 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:21.892046 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:21.892053 | orchestrator | changed: 
[testbed-node-0] 2025-09-08 01:13:21.892059 | orchestrator | 2025-09-08 01:13:21.892066 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-08 01:13:21.892073 | orchestrator | Monday 08 September 2025 01:13:07 +0000 (0:00:08.453) 0:04:29.150 ****** 2025-09-08 01:13:21.892079 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.892086 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:21.892093 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:21.892111 | orchestrator | 2025-09-08 01:13:21.892118 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-08 01:13:21.892124 | orchestrator | Monday 08 September 2025 01:13:13 +0000 (0:00:05.302) 0:04:34.453 ****** 2025-09-08 01:13:21.892131 | orchestrator | changed: [testbed-node-0] 2025-09-08 01:13:21.892138 | orchestrator | changed: [testbed-node-1] 2025-09-08 01:13:21.892144 | orchestrator | changed: [testbed-node-2] 2025-09-08 01:13:21.892151 | orchestrator | 2025-09-08 01:13:21.892158 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:13:21.892165 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-08 01:13:21.892172 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:13:21.892179 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-08 01:13:21.892186 | orchestrator | 2025-09-08 01:13:21.892192 | orchestrator | 2025-09-08 01:13:21.892199 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:13:21.892206 | orchestrator | Monday 08 September 2025 01:13:18 +0000 (0:00:05.713) 0:04:40.167 ****** 2025-09-08 01:13:21.892212 | orchestrator | 
=============================================================================== 2025-09-08 01:13:21.892219 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.01s 2025-09-08 01:13:21.892226 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.19s 2025-09-08 01:13:21.892233 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.60s 2025-09-08 01:13:21.892239 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.36s 2025-09-08 01:13:21.892246 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.98s 2025-09-08 01:13:21.892253 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.83s 2025-09-08 01:13:21.892260 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.45s 2025-09-08 01:13:21.892266 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.96s 2025-09-08 01:13:21.892273 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.49s 2025-09-08 01:13:21.892280 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.97s 2025-09-08 01:13:21.892286 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.80s 2025-09-08 01:13:21.892293 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.53s 2025-09-08 01:13:21.892300 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.94s 2025-09-08 01:13:21.892306 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.71s 2025-09-08 01:13:21.892313 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.57s 2025-09-08 01:13:21.892320 | orchestrator | 
service-cert-copy : octavia | Copying over extra CA certificates -------- 5.55s 2025-09-08 01:13:21.892334 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.51s 2025-09-08 01:13:21.892344 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.41s 2025-09-08 01:13:21.892351 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.36s 2025-09-08 01:13:21.892358 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.30s 2025-09-08 01:13:21.892365 | orchestrator | 2025-09-08 01:13:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:24.928160 | orchestrator | 2025-09-08 01:13:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:27.971629 | orchestrator | 2025-09-08 01:13:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:31.013330 | orchestrator | 2025-09-08 01:13:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:34.049074 | orchestrator | 2025-09-08 01:13:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:37.096569 | orchestrator | 2025-09-08 01:13:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:40.130335 | orchestrator | 2025-09-08 01:13:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:43.171028 | orchestrator | 2025-09-08 01:13:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:46.213340 | orchestrator | 2025-09-08 01:13:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:49.258573 | orchestrator | 2025-09-08 01:13:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:52.294207 | orchestrator | 2025-09-08 01:13:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:13:55.338350 | orchestrator | 2025-09-08 01:13:55 | INFO  | Wait 1 
second(s) until refresh of running tasks 2025-09-08 01:13:58.384254 | orchestrator | 2025-09-08 01:13:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:01.422382 | orchestrator | 2025-09-08 01:14:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:04.485859 | orchestrator | 2025-09-08 01:14:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:07.535306 | orchestrator | 2025-09-08 01:14:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:10.578892 | orchestrator | 2025-09-08 01:14:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:13.621877 | orchestrator | 2025-09-08 01:14:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:16.672917 | orchestrator | 2025-09-08 01:14:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:19.715996 | orchestrator | 2025-09-08 01:14:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-08 01:14:22.758177 | orchestrator | 2025-09-08 01:14:23.129292 | orchestrator | 2025-09-08 01:14:23.132655 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Sep 8 01:14:23 UTC 2025 2025-09-08 01:14:23.132685 | orchestrator | 2025-09-08 01:14:23.503682 | orchestrator | ok: Runtime: 0:36:07.100647 2025-09-08 01:14:23.779508 | 2025-09-08 01:14:23.779663 | TASK [Bootstrap services] 2025-09-08 01:14:24.538499 | orchestrator | 2025-09-08 01:14:24.538680 | orchestrator | # BOOTSTRAP 2025-09-08 01:14:24.538703 | orchestrator | 2025-09-08 01:14:24.538717 | orchestrator | + set -e 2025-09-08 01:14:24.538730 | orchestrator | + echo 2025-09-08 01:14:24.538744 | orchestrator | + echo '# BOOTSTRAP' 2025-09-08 01:14:24.538762 | orchestrator | + echo 2025-09-08 01:14:24.538806 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-08 01:14:24.548721 | orchestrator | + set -e 2025-09-08 01:14:24.548785 | orchestrator | + sh -c 
/opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-08 01:14:28.900567 | orchestrator | 2025-09-08 01:14:28 | INFO  | It takes a moment until task 295248ad-c6c9-4dc9-86f4-7f7c8a3a51f2 (flavor-manager) has been started and output is visible here. 2025-09-08 01:14:36.900048 | orchestrator | 2025-09-08 01:14:32 | INFO  | Flavor SCS-1V-4 created 2025-09-08 01:14:36.900106 | orchestrator | 2025-09-08 01:14:32 | INFO  | Flavor SCS-2V-8 created 2025-09-08 01:14:36.900113 | orchestrator | 2025-09-08 01:14:33 | INFO  | Flavor SCS-4V-16 created 2025-09-08 01:14:36.900117 | orchestrator | 2025-09-08 01:14:33 | INFO  | Flavor SCS-8V-32 created 2025-09-08 01:14:36.900120 | orchestrator | 2025-09-08 01:14:33 | INFO  | Flavor SCS-1V-2 created 2025-09-08 01:14:36.900124 | orchestrator | 2025-09-08 01:14:33 | INFO  | Flavor SCS-2V-4 created 2025-09-08 01:14:36.900127 | orchestrator | 2025-09-08 01:14:33 | INFO  | Flavor SCS-4V-8 created 2025-09-08 01:14:36.900131 | orchestrator | 2025-09-08 01:14:33 | INFO  | Flavor SCS-8V-16 created 2025-09-08 01:14:36.900140 | orchestrator | 2025-09-08 01:14:33 | INFO  | Flavor SCS-16V-32 created 2025-09-08 01:14:36.900144 | orchestrator | 2025-09-08 01:14:34 | INFO  | Flavor SCS-1V-8 created 2025-09-08 01:14:36.900147 | orchestrator | 2025-09-08 01:14:34 | INFO  | Flavor SCS-2V-16 created 2025-09-08 01:14:36.900150 | orchestrator | 2025-09-08 01:14:34 | INFO  | Flavor SCS-4V-32 created 2025-09-08 01:14:36.900162 | orchestrator | 2025-09-08 01:14:34 | INFO  | Flavor SCS-1L-1 created 2025-09-08 01:14:36.900165 | orchestrator | 2025-09-08 01:14:34 | INFO  | Flavor SCS-2V-4-20s created 2025-09-08 01:14:36.900169 | orchestrator | 2025-09-08 01:14:34 | INFO  | Flavor SCS-4V-16-100s created 2025-09-08 01:14:36.900172 | orchestrator | 2025-09-08 01:14:34 | INFO  | Flavor SCS-1V-4-10 created 2025-09-08 01:14:36.900175 | orchestrator | 2025-09-08 01:14:35 | INFO  | Flavor SCS-2V-8-20 created 2025-09-08 01:14:36.900178 | orchestrator | 2025-09-08 
01:14:35 | INFO  | Flavor SCS-4V-16-50 created 2025-09-08 01:14:36.900181 | orchestrator | 2025-09-08 01:14:35 | INFO  | Flavor SCS-8V-32-100 created 2025-09-08 01:14:36.900184 | orchestrator | 2025-09-08 01:14:35 | INFO  | Flavor SCS-1V-2-5 created 2025-09-08 01:14:36.900187 | orchestrator | 2025-09-08 01:14:35 | INFO  | Flavor SCS-2V-4-10 created 2025-09-08 01:14:36.900191 | orchestrator | 2025-09-08 01:14:35 | INFO  | Flavor SCS-4V-8-20 created 2025-09-08 01:14:36.900194 | orchestrator | 2025-09-08 01:14:35 | INFO  | Flavor SCS-8V-16-50 created 2025-09-08 01:14:36.900197 | orchestrator | 2025-09-08 01:14:36 | INFO  | Flavor SCS-16V-32-100 created 2025-09-08 01:14:36.900200 | orchestrator | 2025-09-08 01:14:36 | INFO  | Flavor SCS-1V-8-20 created 2025-09-08 01:14:36.900204 | orchestrator | 2025-09-08 01:14:36 | INFO  | Flavor SCS-2V-16-50 created 2025-09-08 01:14:36.900207 | orchestrator | 2025-09-08 01:14:36 | INFO  | Flavor SCS-4V-32-100 created 2025-09-08 01:14:36.900210 | orchestrator | 2025-09-08 01:14:36 | INFO  | Flavor SCS-1L-1-5 created 2025-09-08 01:14:39.041110 | orchestrator | 2025-09-08 01:14:39 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-08 01:14:49.288052 | orchestrator | 2025-09-08 01:14:49 | INFO  | Task 827e7f40-0ee9-46e4-bd17-745d27d3c93f (bootstrap-basic) was prepared for execution. 2025-09-08 01:14:49.288201 | orchestrator | 2025-09-08 01:14:49 | INFO  | It takes a moment until task 827e7f40-0ee9-46e4-bd17-745d27d3c93f (bootstrap-basic) has been started and output is visible here. 
2025-09-08 01:15:49.565491 | orchestrator | 2025-09-08 01:15:49.565612 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-08 01:15:49.565629 | orchestrator | 2025-09-08 01:15:49.565642 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-08 01:15:49.565654 | orchestrator | Monday 08 September 2025 01:14:53 +0000 (0:00:00.078) 0:00:00.078 ****** 2025-09-08 01:15:49.565666 | orchestrator | ok: [localhost] 2025-09-08 01:15:49.565678 | orchestrator | 2025-09-08 01:15:49.565689 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-08 01:15:49.565703 | orchestrator | Monday 08 September 2025 01:14:55 +0000 (0:00:01.849) 0:00:01.928 ****** 2025-09-08 01:15:49.565714 | orchestrator | ok: [localhost] 2025-09-08 01:15:49.565725 | orchestrator | 2025-09-08 01:15:49.565736 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-08 01:15:49.565747 | orchestrator | Monday 08 September 2025 01:15:03 +0000 (0:00:08.336) 0:00:10.264 ****** 2025-09-08 01:15:49.565758 | orchestrator | changed: [localhost] 2025-09-08 01:15:49.565770 | orchestrator | 2025-09-08 01:15:49.565781 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-08 01:15:49.565792 | orchestrator | Monday 08 September 2025 01:15:11 +0000 (0:00:07.392) 0:00:17.657 ****** 2025-09-08 01:15:49.565803 | orchestrator | ok: [localhost] 2025-09-08 01:15:49.565814 | orchestrator | 2025-09-08 01:15:49.565826 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-08 01:15:49.565837 | orchestrator | Monday 08 September 2025 01:15:18 +0000 (0:00:07.004) 0:00:24.661 ****** 2025-09-08 01:15:49.565848 | orchestrator | changed: [localhost] 2025-09-08 01:15:49.565862 | orchestrator | 2025-09-08 01:15:49.565874 | orchestrator | 
TASK [Create public network] *************************************************** 2025-09-08 01:15:49.565885 | orchestrator | Monday 08 September 2025 01:15:26 +0000 (0:00:07.886) 0:00:32.548 ****** 2025-09-08 01:15:49.565896 | orchestrator | changed: [localhost] 2025-09-08 01:15:49.565907 | orchestrator | 2025-09-08 01:15:49.565918 | orchestrator | TASK [Set public network to default] ******************************************* 2025-09-08 01:15:49.565929 | orchestrator | Monday 08 September 2025 01:15:31 +0000 (0:00:05.023) 0:00:37.571 ****** 2025-09-08 01:15:49.565941 | orchestrator | changed: [localhost] 2025-09-08 01:15:49.565954 | orchestrator | 2025-09-08 01:15:49.565978 | orchestrator | TASK [Create public subnet] **************************************************** 2025-09-08 01:15:49.565992 | orchestrator | Monday 08 September 2025 01:15:37 +0000 (0:00:06.377) 0:00:43.949 ****** 2025-09-08 01:15:49.566005 | orchestrator | changed: [localhost] 2025-09-08 01:15:49.566077 | orchestrator | 2025-09-08 01:15:49.566092 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-09-08 01:15:49.566105 | orchestrator | Monday 08 September 2025 01:15:41 +0000 (0:00:04.400) 0:00:48.349 ****** 2025-09-08 01:15:49.566118 | orchestrator | changed: [localhost] 2025-09-08 01:15:49.566130 | orchestrator | 2025-09-08 01:15:49.566142 | orchestrator | TASK [Create manager role] ***************************************************** 2025-09-08 01:15:49.566155 | orchestrator | Monday 08 September 2025 01:15:45 +0000 (0:00:03.863) 0:00:52.213 ****** 2025-09-08 01:15:49.566167 | orchestrator | ok: [localhost] 2025-09-08 01:15:49.566180 | orchestrator | 2025-09-08 01:15:49.566192 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-08 01:15:49.566230 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-08 01:15:49.566243 | orchestrator 
| 2025-09-08 01:15:49.566256 | orchestrator | 2025-09-08 01:15:49.566268 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-08 01:15:49.566281 | orchestrator | Monday 08 September 2025 01:15:49 +0000 (0:00:03.560) 0:00:55.773 ****** 2025-09-08 01:15:49.566316 | orchestrator | =============================================================================== 2025-09-08 01:15:49.566328 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.34s 2025-09-08 01:15:49.566339 | orchestrator | Create volume type local ------------------------------------------------ 7.89s 2025-09-08 01:15:49.566350 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.39s 2025-09-08 01:15:49.566361 | orchestrator | Get volume type local --------------------------------------------------- 7.00s 2025-09-08 01:15:49.566371 | orchestrator | Set public network to default ------------------------------------------- 6.38s 2025-09-08 01:15:49.566382 | orchestrator | Create public network --------------------------------------------------- 5.02s 2025-09-08 01:15:49.566393 | orchestrator | Create public subnet ---------------------------------------------------- 4.40s 2025-09-08 01:15:49.566404 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.86s 2025-09-08 01:15:49.566415 | orchestrator | Create manager role ----------------------------------------------------- 3.56s 2025-09-08 01:15:49.566426 | orchestrator | Gathering Facts --------------------------------------------------------- 1.85s 2025-09-08 01:15:51.879615 | orchestrator | 2025-09-08 01:15:51 | INFO  | It takes a moment until task 4786d588-b1fd-4d00-91b1-d420da14b1fe (image-manager) has been started and output is visible here. 
2025-09-08 01:17:37.320006 | orchestrator | 2025-09-08 01:15:55 | INFO  | Processing image 'Cirros 0.6.2' 2025-09-08 01:17:37.320130 | orchestrator | 2025-09-08 01:15:55 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-09-08 01:17:37.320151 | orchestrator | 2025-09-08 01:15:55 | INFO  | Importing image Cirros 0.6.2 2025-09-08 01:17:37.320163 | orchestrator | 2025-09-08 01:15:55 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-08 01:17:37.320175 | orchestrator | 2025-09-08 01:15:57 | INFO  | Waiting for image to leave queued state... 2025-09-08 01:17:37.320187 | orchestrator | 2025-09-08 01:15:59 | INFO  | Waiting for import to complete... 2025-09-08 01:17:37.320198 | orchestrator | 2025-09-08 01:16:09 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-09-08 01:17:37.320209 | orchestrator | 2025-09-08 01:16:10 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-09-08 01:17:37.320220 | orchestrator | 2025-09-08 01:16:10 | INFO  | Setting internal_version = 0.6.2 2025-09-08 01:17:37.320231 | orchestrator | 2025-09-08 01:16:10 | INFO  | Setting image_original_user = cirros 2025-09-08 01:17:37.320243 | orchestrator | 2025-09-08 01:16:10 | INFO  | Adding tag os:cirros 2025-09-08 01:17:37.320254 | orchestrator | 2025-09-08 01:16:10 | INFO  | Setting property architecture: x86_64 2025-09-08 01:17:37.320265 | orchestrator | 2025-09-08 01:16:10 | INFO  | Setting property hw_disk_bus: scsi 2025-09-08 01:17:37.320324 | orchestrator | 2025-09-08 01:16:10 | INFO  | Setting property hw_rng_model: virtio 2025-09-08 01:17:37.320336 | orchestrator | 2025-09-08 01:16:11 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-08 01:17:37.320347 | orchestrator | 2025-09-08 01:16:11 | INFO  | Setting property hw_watchdog_action: reset 2025-09-08 01:17:37.320358 | orchestrator | 2025-09-08 01:16:11 | 
INFO  | Setting property hypervisor_type: qemu 2025-09-08 01:17:37.320368 | orchestrator | 2025-09-08 01:16:11 | INFO  | Setting property os_distro: cirros 2025-09-08 01:17:37.320379 | orchestrator | 2025-09-08 01:16:11 | INFO  | Setting property replace_frequency: never 2025-09-08 01:17:37.320390 | orchestrator | 2025-09-08 01:16:12 | INFO  | Setting property uuid_validity: none 2025-09-08 01:17:37.320401 | orchestrator | 2025-09-08 01:16:12 | INFO  | Setting property provided_until: none 2025-09-08 01:17:37.320437 | orchestrator | 2025-09-08 01:16:12 | INFO  | Setting property image_description: Cirros 2025-09-08 01:17:37.320457 | orchestrator | 2025-09-08 01:16:12 | INFO  | Setting property image_name: Cirros 2025-09-08 01:17:37.320468 | orchestrator | 2025-09-08 01:16:12 | INFO  | Setting property internal_version: 0.6.2 2025-09-08 01:17:37.320485 | orchestrator | 2025-09-08 01:16:13 | INFO  | Setting property image_original_user: cirros 2025-09-08 01:17:37.320496 | orchestrator | 2025-09-08 01:16:13 | INFO  | Setting property os_version: 0.6.2 2025-09-08 01:17:37.320509 | orchestrator | 2025-09-08 01:16:13 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-08 01:17:37.320523 | orchestrator | 2025-09-08 01:16:13 | INFO  | Setting property image_build_date: 2023-05-30 2025-09-08 01:17:37.320536 | orchestrator | 2025-09-08 01:16:13 | INFO  | Checking status of 'Cirros 0.6.2' 2025-09-08 01:17:37.320549 | orchestrator | 2025-09-08 01:16:13 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-09-08 01:17:37.320561 | orchestrator | 2025-09-08 01:16:13 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-09-08 01:17:37.320574 | orchestrator | 2025-09-08 01:16:14 | INFO  | Processing image 'Cirros 0.6.3' 2025-09-08 01:17:37.320586 | orchestrator | 2025-09-08 01:16:14 | INFO  | Tested URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-09-08 01:17:37.320599 | orchestrator | 2025-09-08 01:16:14 | INFO  | Importing image Cirros 0.6.3 2025-09-08 01:17:37.320612 | orchestrator | 2025-09-08 01:16:14 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-08 01:17:37.320625 | orchestrator | 2025-09-08 01:16:15 | INFO  | Waiting for image to leave queued state... 2025-09-08 01:17:37.320638 | orchestrator | 2025-09-08 01:16:17 | INFO  | Waiting for import to complete... 2025-09-08 01:17:37.320651 | orchestrator | 2025-09-08 01:16:27 | INFO  | Waiting for import to complete... 2025-09-08 01:17:37.320680 | orchestrator | 2025-09-08 01:16:37 | INFO  | Waiting for import to complete... 2025-09-08 01:17:37.320694 | orchestrator | 2025-09-08 01:16:48 | INFO  | Waiting for import to complete... 2025-09-08 01:17:37.320706 | orchestrator | 2025-09-08 01:16:58 | INFO  | Waiting for import to complete... 2025-09-08 01:17:37.320719 | orchestrator | 2025-09-08 01:17:08 | INFO  | Waiting for import to complete... 2025-09-08 01:17:37.320731 | orchestrator | 2025-09-08 01:17:18 | INFO  | Waiting for import to complete... 2025-09-08 01:17:37.320744 | orchestrator | 2025-09-08 01:17:28 | INFO  | Waiting for image to leave queued state... 2025-09-08 01:17:37.320756 | orchestrator | 2025-09-08 01:17:30 | INFO  | Waiting for image to leave queued state... 2025-09-08 01:17:37.320769 | orchestrator | 2025-09-08 01:17:32 | INFO  | Waiting for image to leave queued state... 2025-09-08 01:17:37.320783 | orchestrator | 2025-09-08 01:17:34 | INFO  | Waiting for image to leave queued state... 
2025-09-08 01:17:37.320796 | orchestrator | 2025-09-08 01:17:36 | ERROR  | Image Cirros 0.6.3 seems stuck in queued state 2025-09-08 01:17:37.320809 | orchestrator | 2025-09-08 01:17:36 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-09-08 01:17:37.320822 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-09-08 01:17:37.320835 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:126 │ 2025-09-08 01:17:37.320857 | orchestrator | │ in create_cli_args │ 2025-09-08 01:17:37.320868 | orchestrator | │ │ 2025-09-08 01:17:37.320878 | orchestrator | │ 123 │ │ logger.add(sys.stderr, format=log_fmt, level=level, colorize= │ 2025-09-08 01:17:37.320889 | orchestrator | │ 124 │ │ │ 2025-09-08 01:17:37.320900 | orchestrator | │ 125 │ │ if __name__ == "__main__" or __name__ == "openstack_image_man │ 2025-09-08 01:17:37.320910 | orchestrator | │ ❱ 126 │ │ │ self.main() │ 2025-09-08 01:17:37.320921 | orchestrator | │ 127 │ │ 2025-09-08 01:17:37.320932 | orchestrator | │ 128 │ def read_image_files(self, return_all_images=False) -> list: │ 2025-09-08 01:17:37.320943 | orchestrator | │ 129 │ │ """Read all YAML files in self.CONF.images""" │ 2025-09-08 01:17:37.320953 | orchestrator | │ │ 2025-09-08 01:17:37.320970 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-08 01:17:37.320995 | orchestrator | │ │ check = True │ │ 2025-09-08 01:17:37.321006 | orchestrator | │ │ check_age = False │ │ 2025-09-08 01:17:37.321017 | orchestrator | │ │ check_only = False │ │ 2025-09-08 01:17:37.321027 | orchestrator | │ │ cloud = 
'admin' │ │ 2025-09-08 01:17:37.321038 | orchestrator | │ │ deactivate = False │ │ 2025-09-08 01:17:37.321049 | orchestrator | │ │ debug = False │ │ 2025-09-08 01:17:37.321060 | orchestrator | │ │ delete = False │ │ 2025-09-08 01:17:37.321071 | orchestrator | │ │ dry_run = False │ │ 2025-09-08 01:17:37.321081 | orchestrator | │ │ filter = 'Cirros' │ │ 2025-09-08 01:17:37.321092 | orchestrator | │ │ force = False │ │ 2025-09-08 01:17:37.321103 | orchestrator | │ │ hide = True │ │ 2025-09-08 01:17:37.321114 | orchestrator | │ │ images = '/etc/images' │ │ 2025-09-08 01:17:37.321125 | orchestrator | │ │ keep = False │ │ 2025-09-08 01:17:37.321136 | orchestrator | │ │ latest = False │ │ 2025-09-08 01:17:37.321146 | orchestrator | │ │ level = 'INFO' │ │ 2025-09-08 01:17:37.321157 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} │ │ 2025-09-08 01:17:37.321168 | orchestrator | │ │ | {level: <8} | '+17 │ │ 2025-09-08 01:17:37.321178 | orchestrator | │ │ max_age = 90 │ │ 2025-09-08 01:17:37.321190 | orchestrator | │ │ self = │ │ 2025-09-08 01:17:37.356011 | orchestrator | │ │ share_action = 'add' │ │ 2025-09-08 01:17:37.356059 | orchestrator | │ │ share_domain = 'default' │ │ 2025-09-08 01:17:37.356083 | orchestrator | │ │ share_image = None │ │ 2025-09-08 01:17:37.356094 | orchestrator | │ │ share_target = None │ │ 2025-09-08 01:17:37.356105 | orchestrator | │ │ share_type = 'project' │ │ 2025-09-08 01:17:37.356116 | orchestrator | │ │ tag = 'managed_by_osism' │ │ 2025-09-08 01:17:37.356126 | orchestrator | │ │ use_os_hidden = False │ │ 2025-09-08 01:17:37.356137 | orchestrator | │ │ yes_i_really_know_what_i_do = False │ │ 2025-09-08 01:17:37.356149 | orchestrator | │ 
╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-09-08 01:17:37.356162 | orchestrator | │ │ 2025-09-08 01:17:37.356173 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:253 │ 2025-09-08 01:17:37.356184 | orchestrator | │ in main │ 2025-09-08 01:17:37.356195 | orchestrator | │ │ 2025-09-08 01:17:37.356206 | orchestrator | │ 250 │ │ else: │ 2025-09-08 01:17:37.356216 | orchestrator | │ 251 │ │ │ self.create_connection() │ 2025-09-08 01:17:37.356227 | orchestrator | │ 252 │ │ │ images = self.read_image_files() │ 2025-09-08 01:17:37.356238 | orchestrator | │ ❱ 253 │ │ │ managed_images = self.process_images(images) │ 2025-09-08 01:17:37.356249 | orchestrator | │ 254 │ │ │ │ 2025-09-08 01:17:37.356266 | orchestrator | │ 255 │ │ │ # ignore all non-specified images when using --filter │ 2025-09-08 01:17:37.356308 | orchestrator | │ 256 │ │ │ if self.CONF.filter: │ 2025-09-08 01:17:37.356320 | orchestrator | │ │ 2025-09-08 01:17:37.356331 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-08 01:17:37.356344 | orchestrator | │ │ images = [ │ │ 2025-09-08 01:17:37.356355 | orchestrator | │ │ │ { │ │ 2025-09-08 01:17:37.356366 | orchestrator | │ │ │ │ 'name': 'Cirros', │ │ 2025-09-08 01:17:37.356376 | orchestrator | │ │ │ │ 'enable': True, │ │ 2025-09-08 01:17:37.356387 | orchestrator | │ │ │ │ 'format': 'qcow2', │ │ 2025-09-08 01:17:37.356398 | orchestrator | │ │ │ │ 'login': 'cirros', │ │ 2025-09-08 
01:17:37.356409 | orchestrator | │ │ │ │ 'password': 'gocubsgo', │ │ 2025-09-08 01:17:37.356420 | orchestrator | │ │ │ │ 'min_disk': 1, │ │ 2025-09-08 01:17:37.356431 | orchestrator | │ │ │ │ 'min_ram': 32, │ │ 2025-09-08 01:17:37.356442 | orchestrator | │ │ │ │ 'status': 'active', │ │ 2025-09-08 01:17:37.356452 | orchestrator | │ │ │ │ 'visibility': 'public', │ │ 2025-09-08 01:17:37.356463 | orchestrator | │ │ │ │ 'multi': False, │ │ 2025-09-08 01:17:37.356481 | orchestrator | │ │ │ │ ... +3 │ │ 2025-09-08 01:17:37.356492 | orchestrator | │ │ │ } │ │ 2025-09-08 01:17:37.356503 | orchestrator | │ │ ] │ │ 2025-09-08 01:17:37.356514 | orchestrator | │ │ self = │ │ 2025-09-08 01:17:37.356536 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-09-08 01:17:37.356558 | orchestrator | │ │ 2025-09-08 01:17:37.356569 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:370 │ 2025-09-08 01:17:37.356581 | orchestrator | │ in process_images │ 2025-09-08 01:17:37.356594 | orchestrator | │ │ 2025-09-08 01:17:37.356606 | orchestrator | │ 367 │ │ │ if "image_name" not in image["meta"]: │ 2025-09-08 01:17:37.356619 | orchestrator | │ 368 │ │ │ │ image["meta"]["image_name"] = image["name"] │ 2025-09-08 01:17:37.356631 | orchestrator | │ 369 │ │ │ │ 2025-09-08 01:17:37.356645 | orchestrator | │ ❱ 370 │ │ │ existing_images, imported_image, previous_image = self.pr │ 2025-09-08 01:17:37.356657 | orchestrator | │ 371 │ │ │ │ image, versions, sorted_versions, image["meta"].copy( │ 2025-09-08 01:17:37.356670 | orchestrator | │ 372 │ │ │ ) │ 2025-09-08 
01:17:37.356682 | orchestrator | │ 373 │ │ │ managed_images = managed_images.union(existing_images) │ 2025-09-08 01:17:37.356695 | orchestrator | │ │ 2025-09-08 01:17:37.356708 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-09-08 01:17:37.356721 | orchestrator | │ │ image = { │ │ 2025-09-08 01:17:37.356734 | orchestrator | │ │ │ 'name': 'Cirros', │ │ 2025-09-08 01:17:37.356746 | orchestrator | │ │ │ 'enable': True, │ │ 2025-09-08 01:17:37.356759 | orchestrator | │ │ │ 'format': 'qcow2', │ │ 2025-09-08 01:17:37.356771 | orchestrator | │ │ │ 'login': 'cirros', │ │ 2025-09-08 01:17:37.356784 | orchestrator | │ │ │ 'password': 'gocubsgo', │ │ 2025-09-08 01:17:37.356796 | orchestrator | │ │ │ 'min_disk': 1, │ │ 2025-09-08 01:17:37.356809 | orchestrator | │ │ │ 'min_ram': 32, │ │ 2025-09-08 01:17:37.356822 | orchestrator | │ │ │ 'status': 'active', │ │ 2025-09-08 01:17:37.356834 | orchestrator | │ │ │ 'visibility': 'public', │ │ 2025-09-08 01:17:37.356846 | orchestrator | │ │ │ 'multi': False, │ │ 2025-09-08 01:17:37.356862 | orchestrator | │ │ │ ... 
+3 โ”‚ โ”‚ 2025-09-08 01:17:37.356873 | orchestrator | โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.356884 | orchestrator | โ”‚ โ”‚ images = [ โ”‚ โ”‚ 2025-09-08 01:17:37.356901 | orchestrator | โ”‚ โ”‚ โ”‚ { โ”‚ โ”‚ 2025-09-08 01:17:37.356912 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'name': 'Cirros', โ”‚ โ”‚ 2025-09-08 01:17:37.356926 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'enable': True, โ”‚ โ”‚ 2025-09-08 01:17:37.356937 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'format': 'qcow2', โ”‚ โ”‚ 2025-09-08 01:17:37.356948 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'login': 'cirros', โ”‚ โ”‚ 2025-09-08 01:17:37.356959 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'password': 'gocubsgo', โ”‚ โ”‚ 2025-09-08 01:17:37.356969 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'min_disk': 1, โ”‚ โ”‚ 2025-09-08 01:17:37.356980 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'min_ram': 32, โ”‚ โ”‚ 2025-09-08 01:17:37.356991 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'status': 'active', โ”‚ โ”‚ 2025-09-08 01:17:37.357002 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'visibility': 'public', โ”‚ โ”‚ 2025-09-08 01:17:37.357013 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'multi': False, โ”‚ โ”‚ 2025-09-08 01:17:37.357024 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ ... 
+3 โ”‚ โ”‚ 2025-09-08 01:17:37.357035 | orchestrator | โ”‚ โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.357045 | orchestrator | โ”‚ โ”‚ ] โ”‚ โ”‚ 2025-09-08 01:17:37.357056 | orchestrator | โ”‚ โ”‚ managed_images = set() โ”‚ โ”‚ 2025-09-08 01:17:37.357067 | orchestrator | โ”‚ โ”‚ required_key = 'visibility' โ”‚ โ”‚ 2025-09-08 01:17:37.357077 | orchestrator | โ”‚ โ”‚ REQUIRED_KEYS = [ โ”‚ โ”‚ 2025-09-08 01:17:37.357093 | orchestrator | โ”‚ โ”‚ โ”‚ 'format', โ”‚ โ”‚ 2025-09-08 01:17:37.386576 | orchestrator | โ”‚ โ”‚ โ”‚ 'name', โ”‚ โ”‚ 2025-09-08 01:17:37.386611 | orchestrator | โ”‚ โ”‚ โ”‚ 'login', โ”‚ โ”‚ 2025-09-08 01:17:37.386622 | orchestrator | โ”‚ โ”‚ โ”‚ 'status', โ”‚ โ”‚ 2025-09-08 01:17:37.386633 | orchestrator | โ”‚ โ”‚ โ”‚ 'versions', โ”‚ โ”‚ 2025-09-08 01:17:37.386644 | orchestrator | โ”‚ โ”‚ โ”‚ 'visibility' โ”‚ โ”‚ 2025-09-08 01:17:37.386655 | orchestrator | โ”‚ โ”‚ ] โ”‚ โ”‚ 2025-09-08 01:17:37.386666 | orchestrator | โ”‚ โ”‚ self = โ”‚ โ”‚ 2025-09-08 01:17:37.386687 | orchestrator | โ”‚ โ”‚ sorted_versions = ['0.6.2', '0.6.3'] โ”‚ โ”‚ 2025-09-08 01:17:37.386708 | orchestrator | โ”‚ โ”‚ url = 'https://github.com/cirros-dev/cirros/releases/downloโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.386719 | orchestrator | โ”‚ โ”‚ version = { โ”‚ โ”‚ 2025-09-08 01:17:37.386730 | orchestrator | โ”‚ โ”‚ โ”‚ 'version': '0.6.3', โ”‚ โ”‚ 2025-09-08 01:17:37.386741 | orchestrator | โ”‚ โ”‚ โ”‚ 'url': โ”‚ โ”‚ 2025-09-08 01:17:37.386751 | orchestrator | โ”‚ โ”‚ 'https://github.com/cirros-dev/cirros/releases/downloโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.386762 | orchestrator | โ”‚ โ”‚ โ”‚ 'checksum': โ”‚ โ”‚ 2025-09-08 01:17:37.386772 | orchestrator | โ”‚ โ”‚ 'sha256:7d6355852aeb6dbcd191bcda7cd74f1536cfe5cbf8a10โ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.386795 | orchestrator | โ”‚ โ”‚ โ”‚ 'build_date': datetime.date(2024, 9, 26) โ”‚ โ”‚ 2025-09-08 01:17:37.386805 | orchestrator | โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.386816 | orchestrator | โ”‚ โ”‚ versions = { โ”‚ โ”‚ 2025-09-08 01:17:37.386827 | 
orchestrator | โ”‚ โ”‚ โ”‚ '0.6.2': { โ”‚ โ”‚ 2025-09-08 01:17:37.386837 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'url': โ”‚ โ”‚ 2025-09-08 01:17:37.386848 | orchestrator | โ”‚ โ”‚ 'https://github.com/cirros-dev/cirros/releases/downloโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.386859 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'meta': { โ”‚ โ”‚ 2025-09-08 01:17:37.386870 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 'image_source': โ”‚ โ”‚ 2025-09-08 01:17:37.386880 | orchestrator | โ”‚ โ”‚ 'https://github.com/cirros-dev/cirros/releases/downloโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.386891 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 'image_build_date': '2023-05-30' โ”‚ โ”‚ 2025-09-08 01:17:37.386902 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.386912 | orchestrator | โ”‚ โ”‚ โ”‚ }, โ”‚ โ”‚ 2025-09-08 01:17:37.386923 | orchestrator | โ”‚ โ”‚ โ”‚ '0.6.3': { โ”‚ โ”‚ 2025-09-08 01:17:37.386934 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'url': โ”‚ โ”‚ 2025-09-08 01:17:37.386944 | orchestrator | โ”‚ โ”‚ 'https://github.com/cirros-dev/cirros/releases/downloโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.386955 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'meta': { โ”‚ โ”‚ 2025-09-08 01:17:37.386966 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 'image_source': โ”‚ โ”‚ 2025-09-08 01:17:37.386976 | orchestrator | โ”‚ โ”‚ 'https://github.com/cirros-dev/cirros/releases/downloโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.386987 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 'image_build_date': '2024-09-26' โ”‚ โ”‚ 2025-09-08 01:17:37.386998 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.387008 | orchestrator | โ”‚ โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.387019 | orchestrator | โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.387030 | orchestrator | โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ โ”‚ 2025-09-08 01:17:37.387042 | orchestrator | โ”‚ โ”‚ 2025-09-08 01:17:37.387053 | orchestrator | โ”‚ 
/usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:669 โ”‚ 2025-09-08 01:17:37.387069 | orchestrator | โ”‚ in process_image โ”‚ 2025-09-08 01:17:37.387080 | orchestrator | โ”‚ โ”‚ 2025-09-08 01:17:37.387099 | orchestrator | โ”‚ 666 โ”‚ โ”‚ โ”‚ โ”‚ existing_images.add(name) โ”‚ 2025-09-08 01:17:37.387111 | orchestrator | โ”‚ 667 โ”‚ โ”‚ โ”‚ โ”‚ 2025-09-08 01:17:37.387121 | orchestrator | โ”‚ 668 โ”‚ โ”‚ โ”‚ if imported_image: โ”‚ 2025-09-08 01:17:37.387132 | orchestrator | โ”‚ โฑ 669 โ”‚ โ”‚ โ”‚ โ”‚ self.set_properties( โ”‚ 2025-09-08 01:17:37.387143 | orchestrator | โ”‚ 670 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ image.copy(), name, versions, version, upstream_c โ”‚ 2025-09-08 01:17:37.387154 | orchestrator | โ”‚ 671 โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ 2025-09-08 01:17:37.387170 | orchestrator | โ”‚ 672 โ”‚ โ”‚ return existing_images, imported_image, previous_image โ”‚ 2025-09-08 01:17:37.387181 | orchestrator | โ”‚ โ”‚ 2025-09-08 01:17:37.387193 | orchestrator | โ”‚ โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ locals โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ 2025-09-08 01:17:37.387206 | orchestrator | โ”‚ โ”‚ cloud_images = { โ”‚ โ”‚ 2025-09-08 01:17:37.387217 | orchestrator | โ”‚ โ”‚ โ”‚ 'Cirros 0.6.2': โ”‚ โ”‚ 2025-09-08 01:17:37.387227 | orchestrator | โ”‚ โ”‚ openstack.image.v2.image.Image(name=Cirros 0.6.2, โ”‚ โ”‚ 2025-09-08 01:17:37.387238 | orchestrator | โ”‚ โ”‚ disk_format=raw, container_format=bare, โ”‚ โ”‚ 2025-09-08 01:17:37.387248 | orchestrator | โ”‚ โ”‚ visibility=private, size=117440512, โ”‚ โ”‚ 2025-09-08 01:17:37.387259 | orchestrator | โ”‚ โ”‚ virtual_size=117440512, status=active, โ”‚ โ”‚ 2025-09-08 01:17:37.387292 | orchestrator | โ”‚ โ”‚ checksum=4245576e3df99ea1211871b8b9514d3b, โ”‚ โ”‚ 2025-09-08 01:17:37.387304 | orchestrator | โ”‚ โ”‚ protected=False, min_ram=32, min_disk=1, โ”‚ โ”‚ 2025-09-08 01:17:37.387315 | orchestrator | 
โ”‚ โ”‚ owner=48d4d40357694204a4b0be96199666b9, โ”‚ โ”‚ 2025-09-08 01:17:37.387326 | orchestrator | โ”‚ โ”‚ os_hidden=False, os_hash_algo=sha512, โ”‚ โ”‚ 2025-09-08 01:17:37.387336 | orchestrator | โ”‚ โ”‚ os_hash_value=dbb480bdc4f13ead7e00b62766df2815ddc8dโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.387347 | orchestrator | โ”‚ โ”‚ id=687b7344-20ac-4df3-b397-4589c81ec2df, โ”‚ โ”‚ 2025-09-08 01:17:37.387358 | orchestrator | โ”‚ โ”‚ created_at=2025-09-08T01:15:56Z, โ”‚ โ”‚ 2025-09-08 01:17:37.387368 | orchestrator | โ”‚ โ”‚ updated_at=2025-09-08T01:16:03Z, โ”‚ โ”‚ 2025-09-08 01:17:37.387379 | orchestrator | โ”‚ โ”‚ tags=['managed_by_osism'], โ”‚ โ”‚ 2025-09-08 01:17:37.387390 | orchestrator | โ”‚ โ”‚ file=/v2/images/687b7344-20ac-4df3-b397-4589c81ec2dโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.387401 | orchestrator | โ”‚ โ”‚ schema=/v2/schemas/image, โ”‚ โ”‚ 2025-09-08 01:17:37.387411 | orchestrator | โ”‚ โ”‚ properties={'owner_specified.openstack.md5': '', โ”‚ โ”‚ 2025-09-08 01:17:37.387422 | orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.sha256': '', โ”‚ โ”‚ 2025-09-08 01:17:37.387433 | orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.object': 'images/Cirros โ”‚ โ”‚ 2025-09-08 01:17:37.387444 | orchestrator | โ”‚ โ”‚ 0.6.2', 'os_glance_importing_to_stores': '', โ”‚ โ”‚ 2025-09-08 01:17:37.387455 | orchestrator | โ”‚ โ”‚ 'os_glance_failed_import': '', 'stores': 'rbd'}, โ”‚ โ”‚ 2025-09-08 01:17:37.387466 | orchestrator | โ”‚ โ”‚ location=Munch({'cloud': 'envvars', 'region_name': โ”‚ โ”‚ 2025-09-08 01:17:37.387477 | orchestrator | โ”‚ โ”‚ '', 'zone': None, 'project': Munch({'id': โ”‚ โ”‚ 2025-09-08 01:17:37.387488 | orchestrator | โ”‚ โ”‚ '48d4d40357694204a4b0be96199666b9', 'name': 'admin', โ”‚ โ”‚ 2025-09-08 01:17:37.387498 | orchestrator | โ”‚ โ”‚ 'domain_id': None, 'domain_name': 'Default'})})) โ”‚ โ”‚ 2025-09-08 01:17:37.387514 | orchestrator | โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.387525 | orchestrator | โ”‚ โ”‚ existence = False โ”‚ โ”‚ 2025-09-08 01:17:37.387536 | orchestrator | 
โ”‚ โ”‚ existing_images = {'Cirros 0.6.2', 'Cirros 0.6.3'} โ”‚ โ”‚ 2025-09-08 01:17:37.387546 | orchestrator | โ”‚ โ”‚ image = { โ”‚ โ”‚ 2025-09-08 01:17:37.387565 | orchestrator | โ”‚ โ”‚ โ”‚ 'name': 'Cirros', โ”‚ โ”‚ 2025-09-08 01:17:37.387575 | orchestrator | โ”‚ โ”‚ โ”‚ 'enable': True, โ”‚ โ”‚ 2025-09-08 01:17:37.387586 | orchestrator | โ”‚ โ”‚ โ”‚ 'format': 'qcow2', โ”‚ โ”‚ 2025-09-08 01:17:37.387597 | orchestrator | โ”‚ โ”‚ โ”‚ 'login': 'cirros', โ”‚ โ”‚ 2025-09-08 01:17:37.387608 | orchestrator | โ”‚ โ”‚ โ”‚ 'password': 'gocubsgo', โ”‚ โ”‚ 2025-09-08 01:17:37.387624 | orchestrator | โ”‚ โ”‚ โ”‚ 'min_disk': 1, โ”‚ โ”‚ 2025-09-08 01:17:37.415000 | orchestrator | โ”‚ โ”‚ โ”‚ 'min_ram': 32, โ”‚ โ”‚ 2025-09-08 01:17:37.415026 | orchestrator | โ”‚ โ”‚ โ”‚ 'status': 'active', โ”‚ โ”‚ 2025-09-08 01:17:37.415038 | orchestrator | โ”‚ โ”‚ โ”‚ 'visibility': 'public', โ”‚ โ”‚ 2025-09-08 01:17:37.415048 | orchestrator | โ”‚ โ”‚ โ”‚ 'multi': False, โ”‚ โ”‚ 2025-09-08 01:17:37.415059 | orchestrator | โ”‚ โ”‚ โ”‚ ... 
+3 โ”‚ โ”‚ 2025-09-08 01:17:37.415070 | orchestrator | โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.415081 | orchestrator | โ”‚ โ”‚ image_name = 'Cirros' โ”‚ โ”‚ 2025-09-08 01:17:37.415091 | orchestrator | โ”‚ โ”‚ import_result = None โ”‚ โ”‚ 2025-09-08 01:17:37.415102 | orchestrator | โ”‚ โ”‚ imported_image = openstack.image.v2.image.Image(name=Cirros 0.6.2, โ”‚ โ”‚ 2025-09-08 01:17:37.415113 | orchestrator | โ”‚ โ”‚ disk_format=raw, container_format=bare, โ”‚ โ”‚ 2025-09-08 01:17:37.415124 | orchestrator | โ”‚ โ”‚ visibility=private, size=117440512, โ”‚ โ”‚ 2025-09-08 01:17:37.415135 | orchestrator | โ”‚ โ”‚ virtual_size=117440512, status=active, โ”‚ โ”‚ 2025-09-08 01:17:37.415145 | orchestrator | โ”‚ โ”‚ checksum=4245576e3df99ea1211871b8b9514d3b, โ”‚ โ”‚ 2025-09-08 01:17:37.415156 | orchestrator | โ”‚ โ”‚ protected=False, min_ram=32, min_disk=1, โ”‚ โ”‚ 2025-09-08 01:17:37.415167 | orchestrator | โ”‚ โ”‚ owner=48d4d40357694204a4b0be96199666b9, โ”‚ โ”‚ 2025-09-08 01:17:37.415178 | orchestrator | โ”‚ โ”‚ os_hidden=False, os_hash_algo=sha512, โ”‚ โ”‚ 2025-09-08 01:17:37.415188 | orchestrator | โ”‚ โ”‚ os_hash_value=dbb480bdc4f13ead7e00b62766df2815ddc8dโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.415199 | orchestrator | โ”‚ โ”‚ id=687b7344-20ac-4df3-b397-4589c81ec2df, โ”‚ โ”‚ 2025-09-08 01:17:37.415210 | orchestrator | โ”‚ โ”‚ created_at=2025-09-08T01:15:56Z, โ”‚ โ”‚ 2025-09-08 01:17:37.415220 | orchestrator | โ”‚ โ”‚ updated_at=2025-09-08T01:16:03Z, โ”‚ โ”‚ 2025-09-08 01:17:37.415231 | orchestrator | โ”‚ โ”‚ tags=['managed_by_osism'], โ”‚ โ”‚ 2025-09-08 01:17:37.415242 | orchestrator | โ”‚ โ”‚ file=/v2/images/687b7344-20ac-4df3-b397-4589c81ec2dโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.415253 | orchestrator | โ”‚ โ”‚ schema=/v2/schemas/image, โ”‚ โ”‚ 2025-09-08 01:17:37.415263 | orchestrator | โ”‚ โ”‚ properties={'owner_specified.openstack.md5': '', โ”‚ โ”‚ 2025-09-08 01:17:37.415301 | orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.sha256': '', โ”‚ โ”‚ 2025-09-08 01:17:37.415313 | 
orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.object': 'images/Cirros โ”‚ โ”‚ 2025-09-08 01:17:37.415323 | orchestrator | โ”‚ โ”‚ 0.6.2', 'os_glance_importing_to_stores': '', โ”‚ โ”‚ 2025-09-08 01:17:37.415334 | orchestrator | โ”‚ โ”‚ 'os_glance_failed_import': '', 'stores': 'rbd'}, โ”‚ โ”‚ 2025-09-08 01:17:37.415345 | orchestrator | โ”‚ โ”‚ location=Munch({'cloud': 'envvars', 'region_name': โ”‚ โ”‚ 2025-09-08 01:17:37.415366 | orchestrator | โ”‚ โ”‚ '', 'zone': None, 'project': Munch({'id': โ”‚ โ”‚ 2025-09-08 01:17:37.415377 | orchestrator | โ”‚ โ”‚ '48d4d40357694204a4b0be96199666b9', 'name': 'admin', โ”‚ โ”‚ 2025-09-08 01:17:37.415388 | orchestrator | โ”‚ โ”‚ 'domain_id': None, 'domain_name': 'Default'})})) โ”‚ โ”‚ 2025-09-08 01:17:37.415399 | orchestrator | โ”‚ โ”‚ meta = { โ”‚ โ”‚ 2025-09-08 01:17:37.415417 | orchestrator | โ”‚ โ”‚ โ”‚ 'architecture': 'x86_64', โ”‚ โ”‚ 2025-09-08 01:17:37.415429 | orchestrator | โ”‚ โ”‚ โ”‚ 'hw_disk_bus': 'scsi', โ”‚ โ”‚ 2025-09-08 01:17:37.415440 | orchestrator | โ”‚ โ”‚ โ”‚ 'hw_rng_model': 'virtio', โ”‚ โ”‚ 2025-09-08 01:17:37.415450 | orchestrator | โ”‚ โ”‚ โ”‚ 'hw_scsi_model': 'virtio-scsi', โ”‚ โ”‚ 2025-09-08 01:17:37.415461 | orchestrator | โ”‚ โ”‚ โ”‚ 'hw_watchdog_action': 'reset', โ”‚ โ”‚ 2025-09-08 01:17:37.415472 | orchestrator | โ”‚ โ”‚ โ”‚ 'hypervisor_type': 'qemu', โ”‚ โ”‚ 2025-09-08 01:17:37.415483 | orchestrator | โ”‚ โ”‚ โ”‚ 'os_distro': 'cirros', โ”‚ โ”‚ 2025-09-08 01:17:37.415494 | orchestrator | โ”‚ โ”‚ โ”‚ 'replace_frequency': 'never', โ”‚ โ”‚ 2025-09-08 01:17:37.415505 | orchestrator | โ”‚ โ”‚ โ”‚ 'uuid_validity': 'none', โ”‚ โ”‚ 2025-09-08 01:17:37.415516 | orchestrator | โ”‚ โ”‚ โ”‚ 'provided_until': 'none', โ”‚ โ”‚ 2025-09-08 01:17:37.415527 | orchestrator | โ”‚ โ”‚ โ”‚ ... 
+2 โ”‚ โ”‚ 2025-09-08 01:17:37.415538 | orchestrator | โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.415556 | orchestrator | โ”‚ โ”‚ name = 'Cirros 0.6.3' โ”‚ โ”‚ 2025-09-08 01:17:37.415567 | orchestrator | โ”‚ โ”‚ parsed_url = ParseResult( โ”‚ โ”‚ 2025-09-08 01:17:37.415578 | orchestrator | โ”‚ โ”‚ โ”‚ scheme='https', โ”‚ โ”‚ 2025-09-08 01:17:37.415589 | orchestrator | โ”‚ โ”‚ โ”‚ netloc='github.com', โ”‚ โ”‚ 2025-09-08 01:17:37.415600 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 2025-09-08 01:17:37.415611 | orchestrator | โ”‚ โ”‚ path='/cirros-dev/cirros/releases/download/0.6.3/ciโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.415622 | orchestrator | โ”‚ โ”‚ โ”‚ params='', โ”‚ โ”‚ 2025-09-08 01:17:37.415633 | orchestrator | โ”‚ โ”‚ โ”‚ query='', โ”‚ โ”‚ 2025-09-08 01:17:37.415644 | orchestrator | โ”‚ โ”‚ โ”‚ fragment='' โ”‚ โ”‚ 2025-09-08 01:17:37.415654 | orchestrator | โ”‚ โ”‚ ) โ”‚ โ”‚ 2025-09-08 01:17:37.415665 | orchestrator | โ”‚ โ”‚ previous_image = None โ”‚ โ”‚ 2025-09-08 01:17:37.415676 | orchestrator | โ”‚ โ”‚ r = โ”‚ โ”‚ 2025-09-08 01:17:37.415687 | orchestrator | โ”‚ โ”‚ self = โ”‚ โ”‚ 2025-09-08 01:17:37.415709 | orchestrator | โ”‚ โ”‚ separator = ' ' โ”‚ โ”‚ 2025-09-08 01:17:37.415720 | orchestrator | โ”‚ โ”‚ sorted_versions = ['0.6.2', '0.6.3'] โ”‚ โ”‚ 2025-09-08 01:17:37.415731 | orchestrator | โ”‚ โ”‚ upstream_checksum = '' โ”‚ โ”‚ 2025-09-08 01:17:37.415742 | orchestrator | โ”‚ โ”‚ url = 'https://github.com/cirros-dev/cirros/releases/downโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.415759 | orchestrator | โ”‚ โ”‚ version = '0.6.3' โ”‚ โ”‚ 2025-09-08 01:17:37.415770 | orchestrator | โ”‚ โ”‚ versions = { โ”‚ โ”‚ 2025-09-08 01:17:37.415781 | orchestrator | โ”‚ โ”‚ โ”‚ '0.6.2': { โ”‚ โ”‚ 2025-09-08 01:17:37.415792 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'url': โ”‚ โ”‚ 2025-09-08 01:17:37.415802 | orchestrator | โ”‚ โ”‚ 'https://github.com/cirros-dev/cirros/releases/downโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.415813 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'meta': { โ”‚ โ”‚ 2025-09-08 01:17:37.415824 | orchestrator | 
โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 'image_source': โ”‚ โ”‚ 2025-09-08 01:17:37.415835 | orchestrator | โ”‚ โ”‚ 'https://github.com/cirros-dev/cirros/releases/downโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.415846 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 'image_build_date': '2023-05-30' โ”‚ โ”‚ 2025-09-08 01:17:37.415857 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.415868 | orchestrator | โ”‚ โ”‚ โ”‚ }, โ”‚ โ”‚ 2025-09-08 01:17:37.415879 | orchestrator | โ”‚ โ”‚ โ”‚ '0.6.3': { โ”‚ โ”‚ 2025-09-08 01:17:37.415890 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'url': โ”‚ โ”‚ 2025-09-08 01:17:37.415901 | orchestrator | โ”‚ โ”‚ 'https://github.com/cirros-dev/cirros/releases/downโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.415912 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ 'meta': { โ”‚ โ”‚ 2025-09-08 01:17:37.415923 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 'image_source': โ”‚ โ”‚ 2025-09-08 01:17:37.415934 | orchestrator | โ”‚ โ”‚ 'https://github.com/cirros-dev/cirros/releases/downโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.415945 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ 'image_build_date': '2024-09-26' โ”‚ โ”‚ 2025-09-08 01:17:37.415966 | orchestrator | โ”‚ โ”‚ โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.415977 | orchestrator | โ”‚ โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.415988 | orchestrator | โ”‚ โ”‚ } โ”‚ โ”‚ 2025-09-08 01:17:37.415999 | orchestrator | โ”‚ โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ โ”‚ 2025-09-08 01:17:37.416011 | orchestrator | โ”‚ โ”‚ 2025-09-08 01:17:37.416022 | orchestrator | โ”‚ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:701 โ”‚ 2025-09-08 01:17:37.416032 | orchestrator | โ”‚ in set_properties โ”‚ 2025-09-08 01:17:37.416043 | orchestrator | โ”‚ โ”‚ 2025-09-08 01:17:37.416059 | orchestrator | โ”‚ 698 โ”‚ โ”‚ โ”‚ โ”‚ 2025-09-08 01:17:37.439556 | orchestrator | โ”‚ 699 โ”‚ โ”‚ โ”‚ cloud_image = cloud_images[name] 
โ”‚ 2025-09-08 01:17:37.439591 | orchestrator | โ”‚ 700 โ”‚ โ”‚ โ”‚ real_image_size = int( โ”‚ 2025-09-08 01:17:37.439602 | orchestrator | โ”‚ โฑ 701 โ”‚ โ”‚ โ”‚ โ”‚ Decimal(cloud_image.size / 2**30).quantize( โ”‚ 2025-09-08 01:17:37.439613 | orchestrator | โ”‚ 702 โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ Decimal("1."), rounding=ROUND_UP โ”‚ 2025-09-08 01:17:37.439624 | orchestrator | โ”‚ 703 โ”‚ โ”‚ โ”‚ โ”‚ ) โ”‚ 2025-09-08 01:17:37.439634 | orchestrator | โ”‚ 704 โ”‚ โ”‚ โ”‚ ) โ”‚ 2025-09-08 01:17:37.439658 | orchestrator | โ”‚ โ”‚ 2025-09-08 01:17:37.439669 | orchestrator | โ”‚ โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ locals โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ โ”‚ 2025-09-08 01:17:37.439681 | orchestrator | โ”‚ โ”‚ cloud_image = openstack.image.v2.image.Image(name=Cirros 0.6.3, โ”‚ โ”‚ 2025-09-08 01:17:37.439692 | orchestrator | โ”‚ โ”‚ disk_format=qcow2, container_format=bare, โ”‚ โ”‚ 2025-09-08 01:17:37.439702 | orchestrator | โ”‚ โ”‚ visibility=private, size=None, virtual_size=None, โ”‚ โ”‚ 2025-09-08 01:17:37.439713 | orchestrator | โ”‚ โ”‚ status=queued, checksum=None, protected=False, โ”‚ โ”‚ 2025-09-08 01:17:37.439724 | orchestrator | โ”‚ โ”‚ min_ram=32, min_disk=1, โ”‚ โ”‚ 2025-09-08 01:17:37.439734 | orchestrator | โ”‚ โ”‚ owner=48d4d40357694204a4b0be96199666b9, โ”‚ โ”‚ 2025-09-08 01:17:37.439745 | orchestrator | โ”‚ โ”‚ os_hidden=False, os_hash_algo=None, โ”‚ โ”‚ 2025-09-08 01:17:37.439755 | orchestrator | โ”‚ โ”‚ os_hash_value=None, โ”‚ โ”‚ 2025-09-08 01:17:37.439766 | orchestrator | โ”‚ โ”‚ id=7beea5db-7e01-4804-9271-800445613a3f, โ”‚ โ”‚ 2025-09-08 01:17:37.439776 | orchestrator | โ”‚ โ”‚ created_at=2025-09-08T01:16:14Z, โ”‚ โ”‚ 2025-09-08 01:17:37.439787 | orchestrator | โ”‚ โ”‚ updated_at=2025-09-08T01:17:24Z, โ”‚ โ”‚ 2025-09-08 01:17:37.439798 | orchestrator | โ”‚ โ”‚ tags=['managed_by_osism'], โ”‚ โ”‚ 2025-09-08 01:17:37.439808 | 
orchestrator | โ”‚ โ”‚ file=/v2/images/7beea5db-7e01-4804-9271-800445613a3โ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.439819 | orchestrator | โ”‚ โ”‚ schema=/v2/schemas/image, โ”‚ โ”‚ 2025-09-08 01:17:37.439830 | orchestrator | โ”‚ โ”‚ properties={'owner_specified.openstack.md5': '', โ”‚ โ”‚ 2025-09-08 01:17:37.439840 | orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.sha256': '', โ”‚ โ”‚ 2025-09-08 01:17:37.439851 | orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.object': 'images/Cirros โ”‚ โ”‚ 2025-09-08 01:17:37.439862 | orchestrator | โ”‚ โ”‚ 0.6.3', 'os_glance_importing_to_stores': '', โ”‚ โ”‚ 2025-09-08 01:17:37.439872 | orchestrator | โ”‚ โ”‚ 'os_glance_failed_import': 'rbd'}, โ”‚ โ”‚ 2025-09-08 01:17:37.439883 | orchestrator | โ”‚ โ”‚ location=Munch({'cloud': 'envvars', 'region_name': โ”‚ โ”‚ 2025-09-08 01:17:37.439894 | orchestrator | โ”‚ โ”‚ '', 'zone': None, 'project': Munch({'id': โ”‚ โ”‚ 2025-09-08 01:17:37.439904 | orchestrator | โ”‚ โ”‚ '48d4d40357694204a4b0be96199666b9', 'name': 'admin', โ”‚ โ”‚ 2025-09-08 01:17:37.439915 | orchestrator | โ”‚ โ”‚ 'domain_id': None, 'domain_name': 'Default'})})) โ”‚ โ”‚ 2025-09-08 01:17:37.439926 | orchestrator | โ”‚ โ”‚ cloud_images = { โ”‚ โ”‚ 2025-09-08 01:17:37.439937 | orchestrator | โ”‚ โ”‚ โ”‚ 'Cirros 0.6.3': โ”‚ โ”‚ 2025-09-08 01:17:37.439947 | orchestrator | โ”‚ โ”‚ openstack.image.v2.image.Image(name=Cirros 0.6.3, โ”‚ โ”‚ 2025-09-08 01:17:37.439958 | orchestrator | โ”‚ โ”‚ disk_format=qcow2, container_format=bare, โ”‚ โ”‚ 2025-09-08 01:17:37.439968 | orchestrator | โ”‚ โ”‚ visibility=private, size=None, virtual_size=None, โ”‚ โ”‚ 2025-09-08 01:17:37.439979 | orchestrator | โ”‚ โ”‚ status=queued, checksum=None, protected=False, โ”‚ โ”‚ 2025-09-08 01:17:37.439990 | orchestrator | โ”‚ โ”‚ min_ram=32, min_disk=1, โ”‚ โ”‚ 2025-09-08 01:17:37.440000 | orchestrator | โ”‚ โ”‚ owner=48d4d40357694204a4b0be96199666b9, โ”‚ โ”‚ 2025-09-08 01:17:37.440017 | orchestrator | โ”‚ โ”‚ os_hidden=False, os_hash_algo=None, โ”‚ โ”‚ 
2025-09-08 01:17:37.440028 | orchestrator | โ”‚ โ”‚ os_hash_value=None, โ”‚ โ”‚ 2025-09-08 01:17:37.440038 | orchestrator | โ”‚ โ”‚ id=7beea5db-7e01-4804-9271-800445613a3f, โ”‚ โ”‚ 2025-09-08 01:17:37.440049 | orchestrator | โ”‚ โ”‚ created_at=2025-09-08T01:16:14Z, โ”‚ โ”‚ 2025-09-08 01:17:37.440068 | orchestrator | โ”‚ โ”‚ updated_at=2025-09-08T01:17:24Z, โ”‚ โ”‚ 2025-09-08 01:17:37.440088 | orchestrator | โ”‚ โ”‚ tags=['managed_by_osism'], โ”‚ โ”‚ 2025-09-08 01:17:37.440099 | orchestrator | โ”‚ โ”‚ file=/v2/images/7beea5db-7e01-4804-9271-800445613a3โ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.440110 | orchestrator | โ”‚ โ”‚ schema=/v2/schemas/image, โ”‚ โ”‚ 2025-09-08 01:17:37.440121 | orchestrator | โ”‚ โ”‚ properties={'owner_specified.openstack.md5': '', โ”‚ โ”‚ 2025-09-08 01:17:37.440132 | orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.sha256': '', โ”‚ โ”‚ 2025-09-08 01:17:37.440142 | orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.object': 'images/Cirros โ”‚ โ”‚ 2025-09-08 01:17:37.440153 | orchestrator | โ”‚ โ”‚ 0.6.3', 'os_glance_importing_to_stores': '', โ”‚ โ”‚ 2025-09-08 01:17:37.440163 | orchestrator | โ”‚ โ”‚ 'os_glance_failed_import': 'rbd'}, โ”‚ โ”‚ 2025-09-08 01:17:37.440174 | orchestrator | โ”‚ โ”‚ location=Munch({'cloud': 'envvars', 'region_name': โ”‚ โ”‚ 2025-09-08 01:17:37.440185 | orchestrator | โ”‚ โ”‚ '', 'zone': None, 'project': Munch({'id': โ”‚ โ”‚ 2025-09-08 01:17:37.440195 | orchestrator | โ”‚ โ”‚ '48d4d40357694204a4b0be96199666b9', 'name': 'admin', โ”‚ โ”‚ 2025-09-08 01:17:37.440206 | orchestrator | โ”‚ โ”‚ 'domain_id': None, 'domain_name': 'Default'})})), โ”‚ โ”‚ 2025-09-08 01:17:37.440216 | orchestrator | โ”‚ โ”‚ โ”‚ 'Cirros 0.6.2': โ”‚ โ”‚ 2025-09-08 01:17:37.440227 | orchestrator | โ”‚ โ”‚ openstack.image.v2.image.Image(architecture=x86_64, โ”‚ โ”‚ 2025-09-08 01:17:37.440238 | orchestrator | โ”‚ โ”‚ hw_disk_bus=scsi, hw_rng_model=virtio, โ”‚ โ”‚ 2025-09-08 01:17:37.440248 | orchestrator | โ”‚ โ”‚ hw_scsi_model=virtio-scsi, 
hw_watchdog_action=reset, โ”‚ โ”‚ 2025-09-08 01:17:37.440259 | orchestrator | โ”‚ โ”‚ hypervisor_type=qemu, os_distro=cirros, โ”‚ โ”‚ 2025-09-08 01:17:37.440300 | orchestrator | โ”‚ โ”‚ os_version=0.6.2, name=Cirros 0.6.2, โ”‚ โ”‚ 2025-09-08 01:17:37.440313 | orchestrator | โ”‚ โ”‚ disk_format=raw, container_format=bare, โ”‚ โ”‚ 2025-09-08 01:17:37.440324 | orchestrator | โ”‚ โ”‚ visibility=public, size=117440512, โ”‚ โ”‚ 2025-09-08 01:17:37.440334 | orchestrator | โ”‚ โ”‚ virtual_size=117440512, status=active, โ”‚ โ”‚ 2025-09-08 01:17:37.440345 | orchestrator | โ”‚ โ”‚ checksum=4245576e3df99ea1211871b8b9514d3b, โ”‚ โ”‚ 2025-09-08 01:17:37.440356 | orchestrator | โ”‚ โ”‚ protected=False, min_ram=32, min_disk=1, โ”‚ โ”‚ 2025-09-08 01:17:37.440367 | orchestrator | โ”‚ โ”‚ owner=48d4d40357694204a4b0be96199666b9, โ”‚ โ”‚ 2025-09-08 01:17:37.440377 | orchestrator | โ”‚ โ”‚ os_hidden=False, os_hash_algo=sha512, โ”‚ โ”‚ 2025-09-08 01:17:37.440388 | orchestrator | โ”‚ โ”‚ os_hash_value=dbb480bdc4f13ead7e00b62766df2815ddc8dโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.440398 | orchestrator | โ”‚ โ”‚ id=687b7344-20ac-4df3-b397-4589c81ec2df, โ”‚ โ”‚ 2025-09-08 01:17:37.440409 | orchestrator | โ”‚ โ”‚ created_at=2025-09-08T01:15:56Z, โ”‚ โ”‚ 2025-09-08 01:17:37.440420 | orchestrator | โ”‚ โ”‚ updated_at=2025-09-08T01:16:14Z, tags=['os:cirros', โ”‚ โ”‚ 2025-09-08 01:17:37.440430 | orchestrator | โ”‚ โ”‚ 'managed_by_osism'], โ”‚ โ”‚ 2025-09-08 01:17:37.440447 | orchestrator | โ”‚ โ”‚ file=/v2/images/687b7344-20ac-4df3-b397-4589c81ec2dโ€ฆ โ”‚ โ”‚ 2025-09-08 01:17:37.440458 | orchestrator | โ”‚ โ”‚ schema=/v2/schemas/image, โ”‚ โ”‚ 2025-09-08 01:17:37.440469 | orchestrator | โ”‚ โ”‚ properties={'owner_specified.openstack.md5': '', โ”‚ โ”‚ 2025-09-08 01:17:37.440479 | orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.sha256': '', โ”‚ โ”‚ 2025-09-08 01:17:37.440490 | orchestrator | โ”‚ โ”‚ 'owner_specified.openstack.object': 'images/Cirros โ”‚ โ”‚ 2025-09-08 01:17:37.440501 | orchestrator | โ”‚ 
│ 0.6.2', 'os_glance_importing_to_stores': '', │ │
2025-09-08 01:17:37.440511 | orchestrator | │ │ 'os_glance_failed_import': '', 'replace_frequency': │ │
2025-09-08 01:17:37.440522 | orchestrator | │ │ 'never', 'uuid_validity': 'none', 'provided_until': │ │
2025-09-08 01:17:37.440532 | orchestrator | │ │ 'none', 'image_description': 'Cirros', 'image_name': │ │
2025-09-08 01:17:37.440543 | orchestrator | │ │ 'Cirros', 'internal_version': '0.6.2', │ │
2025-09-08 01:17:37.440554 | orchestrator | │ │ 'image_original_user': 'cirros', 'image_source': │ │
2025-09-08 01:17:37.440564 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/down… │ │
2025-09-08 01:17:37.440575 | orchestrator | │ │ 'image_build_date': '2023-05-30', 'stores': 'rbd'}, │ │
2025-09-08 01:17:37.440585 | orchestrator | │ │ location=Munch({'cloud': 'envvars', 'region_name': │ │
2025-09-08 01:17:37.440596 | orchestrator | │ │ '', 'zone': None, 'project': Munch({'id': │ │
2025-09-08 01:17:37.440606 | orchestrator | │ │ '48d4d40357694204a4b0be96199666b9', 'name': 'admin', │ │
2025-09-08 01:17:37.440622 | orchestrator | │ │ 'domain_id': None, 'domain_name': 'Default'})})) │ │
2025-09-08 01:17:37.512922 | orchestrator | │ │ } │ │
2025-09-08 01:17:37.512991 | orchestrator | │ │ image = { │ │
2025-09-08 01:17:37.513005 | orchestrator | │ │ │ 'name': 'Cirros', │ │
2025-09-08 01:17:37.513017 | orchestrator | │ │ │ 'enable': True, │ │
2025-09-08 01:17:37.513028 | orchestrator | │ │ │ 'format': 'qcow2', │ │
2025-09-08 01:17:37.513039 | orchestrator | │ │ │ 'login': 'cirros', │ │
2025-09-08 01:17:37.513050 | orchestrator | │ │ │ 'password': 'gocubsgo', │ │
2025-09-08 01:17:37.513061 | orchestrator | │ │ │ 'min_disk': 1, │ │
2025-09-08 01:17:37.513071 | orchestrator | │ │ │ 'min_ram': 32, │ │
2025-09-08 01:17:37.513082 | orchestrator | │ │ │ 'status': 'active', │ │
2025-09-08 01:17:37.513093 | orchestrator | │ │ │ 'visibility': 'public', │ │
2025-09-08 01:17:37.513103 | orchestrator | │ │ │ 'multi': False, │ │
2025-09-08 01:17:37.513114 | orchestrator | │ │ │ ... +3 │ │
2025-09-08 01:17:37.513124 | orchestrator | │ │ } │ │
2025-09-08 01:17:37.513148 | orchestrator | │ │ meta = { │ │
2025-09-08 01:17:37.513160 | orchestrator | │ │ │ 'architecture': 'x86_64', │ │
2025-09-08 01:17:37.513171 | orchestrator | │ │ │ 'hw_disk_bus': 'scsi', │ │
2025-09-08 01:17:37.513182 | orchestrator | │ │ │ 'hw_rng_model': 'virtio', │ │
2025-09-08 01:17:37.513209 | orchestrator | │ │ │ 'hw_scsi_model': 'virtio-scsi', │ │
2025-09-08 01:17:37.513220 | orchestrator | │ │ │ 'hw_watchdog_action': 'reset', │ │
2025-09-08 01:17:37.513231 | orchestrator | │ │ │ 'hypervisor_type': 'qemu', │ │
2025-09-08 01:17:37.513242 | orchestrator | │ │ │ 'os_distro': 'cirros', │ │
2025-09-08 01:17:37.513253 | orchestrator | │ │ │ 'replace_frequency': 'never', │ │
2025-09-08 01:17:37.513263 | orchestrator | │ │ │ 'uuid_validity': 'none', │ │
2025-09-08 01:17:37.513307 | orchestrator | │ │ │ 'provided_until': 'none', │ │
2025-09-08 01:17:37.513319 | orchestrator | │ │ │ ... +2 │ │
2025-09-08 01:17:37.513336 | orchestrator | │ │ } │ │
2025-09-08 01:17:37.513347 | orchestrator | │ │ name = 'Cirros 0.6.3' │ │
2025-09-08 01:17:37.513358 | orchestrator | │ │ self = │ │
2025-09-08 01:17:37.513379 | orchestrator | │ │ upstream_checksum = '' │ │
2025-09-08 01:17:37.513390 | orchestrator | │ │ version = '0.6.3' │ │
2025-09-08 01:17:37.513401 | orchestrator | │ │ versions = { │ │
2025-09-08 01:17:37.513412 | orchestrator | │ │ │ '0.6.2': { │ │
2025-09-08 01:17:37.513422 | orchestrator | │ │ │ │ 'url': │ │
2025-09-08 01:17:37.513433 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/down… │ │
2025-09-08 01:17:37.513444 | orchestrator | │ │ │ │ 'meta': { │ │
2025-09-08 01:17:37.513454 | orchestrator | │ │ │ │ │ 'image_source': │ │
2025-09-08 01:17:37.513465 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/down… │ │
2025-09-08 01:17:37.513476 | orchestrator | │ │ │ │ │ 'image_build_date': '2023-05-30' │ │
2025-09-08 01:17:37.513486 | orchestrator | │ │ │ │ } │ │
2025-09-08 01:17:37.513497 | orchestrator | │ │ │ }, │ │
2025-09-08 01:17:37.513509 | orchestrator | │ │ │ '0.6.3': { │ │
2025-09-08 01:17:37.513522 | orchestrator | │ │ │ │ 'url': │ │
2025-09-08 01:17:37.513535 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/down… │ │
2025-09-08 01:17:37.513562 | orchestrator | │ │ │ │ 'meta': { │ │
2025-09-08 01:17:37.513575 | orchestrator | │ │ │ │ │ 'image_source': │ │
2025-09-08 01:17:37.513587 | orchestrator | │ │ 'https://github.com/cirros-dev/cirros/releases/down… │ │
2025-09-08 01:17:37.513599 | orchestrator | │ │ │ │ │ 'image_build_date': '2024-09-26' │ │
2025-09-08 01:17:37.513611 | orchestrator | │ │ │ │ } │ │
2025-09-08 01:17:37.513623 | orchestrator | │ │ │ } │ │
2025-09-08 01:17:37.513635 | orchestrator | │ │ } │ │
2025-09-08 01:17:37.513655 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────╯ │
2025-09-08 01:17:37.513670 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────╯
2025-09-08 01:17:37.513683 | orchestrator | TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
2025-09-08 01:17:37.955614 | orchestrator | ERROR
2025-09-08 01:17:37.955864 | orchestrator | {
2025-09-08 01:17:37.955905 | orchestrator |   "delta": "0:03:13.672409",
2025-09-08 01:17:37.955930 | orchestrator |   "end": "2025-09-08 01:17:37.817368",
2025-09-08 01:17:37.955978 | orchestrator |   "msg": "non-zero return code",
2025-09-08 01:17:37.955998 | orchestrator |   "rc": 1,
2025-09-08 01:17:37.956018 | orchestrator |   "start": "2025-09-08 01:14:24.144959"
2025-09-08 01:17:37.956036 | orchestrator | } failure
2025-09-08 01:17:37.968593 | 
2025-09-08 01:17:37.968704 | PLAY RECAP
2025-09-08 01:17:37.968758 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-08 01:17:37.968783 | 
2025-09-08 01:17:38.190664 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-08 01:17:38.193037 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-08 01:17:39.005284 | 
2025-09-08 01:17:39.005483 | PLAY [Post output play]
2025-09-08 01:17:39.023084 | 
2025-09-08 01:17:39.023270 | LOOP [stage-output : Register sources]
2025-09-08 01:17:39.104999 | 
2025-09-08 01:17:39.105374 | TASK [stage-output : Check sudo]
2025-09-08 01:17:39.977377 | orchestrator | sudo: a password is required
2025-09-08 01:17:40.146333 | orchestrator | ok: Runtime: 0:00:00.017588
2025-09-08 01:17:40.161299 | 
2025-09-08 01:17:40.161475 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-08 01:17:40.201692 | 
2025-09-08 01:17:40.202047 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-08 01:17:40.281133 | orchestrator | ok
2025-09-08 01:17:40.291196 | 
2025-09-08 01:17:40.291353 | LOOP [stage-output : Ensure target folders exist]
2025-09-08 01:17:40.757155 | orchestrator | ok: "docs"
2025-09-08 01:17:40.757516 | 
2025-09-08 01:17:41.020111 | orchestrator | ok: "artifacts"
2025-09-08 01:17:41.275437 | orchestrator | ok: "logs"
2025-09-08 01:17:41.296995 | 
2025-09-08 01:17:41.297145 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-08 01:17:41.330234 | 
2025-09-08 01:17:41.330447 | TASK [stage-output : Make all log files readable]
2025-09-08 01:17:41.599903 | orchestrator | ok
2025-09-08 01:17:41.608405 | 
2025-09-08 01:17:41.608539 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-08 01:17:41.644233 | orchestrator | skipping: Conditional result was False
2025-09-08 01:17:41.659447 | 
2025-09-08 01:17:41.659603 | TASK [stage-output : Discover log files for compression]
2025-09-08 01:17:41.684810 | orchestrator | skipping: Conditional result was False
2025-09-08 01:17:41.696877 | 
2025-09-08 01:17:41.697037 | LOOP [stage-output : Archive everything from logs]
2025-09-08 01:17:41.741770 | 
2025-09-08 01:17:41.742043 | PLAY [Post cleanup play]
2025-09-08 01:17:41.751508 | 
2025-09-08 01:17:41.751618 | TASK [Set cloud fact (Zuul deployment)]
2025-09-08 01:17:41.809857 | orchestrator | ok
2025-09-08 01:17:41.823517 | 
2025-09-08 01:17:41.823673 | TASK [Set cloud fact (local deployment)]
2025-09-08 01:17:41.848685 | orchestrator | skipping: Conditional result was False
2025-09-08 01:17:41.863129 | 
2025-09-08 01:17:41.863291 | TASK [Clean the cloud environment]
2025-09-08 01:17:42.400322 | orchestrator | 2025-09-08 01:17:42 - clean up servers
2025-09-08 01:17:43.676632 | orchestrator | 2025-09-08 01:17:43 - testbed-manager
2025-09-08 01:17:43.767202 | orchestrator | 2025-09-08 01:17:43 - testbed-node-3
2025-09-08 01:17:43.850503 | orchestrator | 2025-09-08 01:17:43 - testbed-node-1
2025-09-08 01:17:43.938669 | orchestrator | 2025-09-08 01:17:43 - testbed-node-5
2025-09-08 01:17:44.032670 | orchestrator | 2025-09-08 01:17:44 - testbed-node-4
2025-09-08 01:17:44.120516 | orchestrator | 2025-09-08 01:17:44 - testbed-node-2
2025-09-08 01:17:44.205656 | orchestrator | 2025-09-08 01:17:44 - testbed-node-0
2025-09-08 01:17:44.297900 | orchestrator | 2025-09-08 01:17:44 - clean up keypairs
2025-09-08 01:17:44.315132 | orchestrator | 2025-09-08 01:17:44 - testbed
2025-09-08 01:17:44.337817 | orchestrator | 2025-09-08 01:17:44 - wait for servers to be gone
2025-09-08 01:17:55.232856 | orchestrator | 2025-09-08 01:17:55 - clean up ports
2025-09-08 01:17:55.405651 | orchestrator | 2025-09-08 01:17:55 - 6cbfebcc-098c-440a-8e05-5953c5a564d1
2025-09-08 01:17:55.655252 | orchestrator | 2025-09-08 01:17:55 - 76316ffa-c19e-4bdb-a7ff-b4cd50982275
2025-09-08 01:17:55.862593 | orchestrator | 2025-09-08 01:17:55 - 79d4ef46-d0ff-43d0-bfef-d40754170b8f
2025-09-08 01:17:56.144199 | orchestrator | 2025-09-08 01:17:56 - 7e620f71-647f-446b-b106-36ecf67f67e6
2025-09-08 01:17:56.375065 | orchestrator | 2025-09-08 01:17:56 - 8df3611a-d36e-4115-b6bb-bf41d74dd155
2025-09-08 01:17:56.601426 | orchestrator | 2025-09-08 01:17:56 - a55eb4f0-a2c0-4505-83c7-9b95e021a8b1
2025-09-08 01:17:57.097789 | orchestrator | 2025-09-08 01:17:57 - c112827d-6108-420c-b678-09a8350007a9
2025-09-08 01:17:57.394437 | orchestrator | 2025-09-08 01:17:57 - clean up volumes
2025-09-08 01:17:57.499484 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-manager-base
2025-09-08 01:17:57.538784 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-0-node-base
2025-09-08 01:17:57.584841 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-1-node-base
2025-09-08 01:17:57.623818 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-2-node-base
2025-09-08 01:17:57.662421 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-5-node-base
2025-09-08 01:17:57.702130 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-4-node-base
2025-09-08 01:17:57.741209 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-3-node-base
2025-09-08 01:17:57.783196 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-5-node-5
2025-09-08 01:17:57.825369 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-1-node-4
2025-09-08 01:17:57.866774 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-4-node-4
2025-09-08 01:17:57.912860 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-8-node-5
2025-09-08 01:17:57.956004 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-0-node-3
2025-09-08 01:17:57.997085 | orchestrator | 2025-09-08 01:17:57 - testbed-volume-2-node-5
2025-09-08 01:17:58.039079 | orchestrator | 2025-09-08 01:17:58 - testbed-volume-6-node-3
2025-09-08 01:17:58.083861 | orchestrator | 2025-09-08 01:17:58 - testbed-volume-3-node-3
2025-09-08 01:17:58.137348 | orchestrator | 2025-09-08 01:17:58 - testbed-volume-7-node-4
2025-09-08 01:17:58.178958 | orchestrator | 2025-09-08 01:17:58 - disconnect routers
2025-09-08 01:17:58.291138 | orchestrator | 2025-09-08 01:17:58 - testbed
2025-09-08 01:17:59.407568 | orchestrator | 2025-09-08 01:17:59 - clean up subnets
2025-09-08 01:17:59.459758 | orchestrator | 2025-09-08 01:17:59 - subnet-testbed-management
2025-09-08 01:17:59.655570 | orchestrator | 2025-09-08 01:17:59 - clean up networks
2025-09-08 01:17:59.836545 | orchestrator | 2025-09-08 01:17:59 - net-testbed-management
2025-09-08 01:18:00.154352 | orchestrator | 2025-09-08 01:18:00 - clean up security groups
2025-09-08 01:18:00.195545 | orchestrator | 2025-09-08 01:18:00 - testbed-management
2025-09-08 01:18:00.323423 | orchestrator | 2025-09-08 01:18:00 - testbed-node
2025-09-08 01:18:00.448264 | orchestrator | 2025-09-08 01:18:00 - clean up floating ips
2025-09-08 01:18:00.486118 | orchestrator | 2025-09-08 01:18:00 - 81.163.193.173
2025-09-08 01:18:00.858236 | orchestrator | 2025-09-08 01:18:00 - clean up routers
2025-09-08 01:18:01.511691 | orchestrator | 2025-09-08 01:18:00 - testbed
2025-09-08 01:18:02.018399 | orchestrator | ok: Runtime: 0:00:19.688050
2025-09-08 01:18:02.022825 | 
2025-09-08 01:18:02.023052 | PLAY RECAP
2025-09-08 01:18:02.023164 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-08 01:18:02.023214 | 
2025-09-08 01:18:02.162544 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-08 01:18:02.163657 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-08 01:18:02.911443 | 
2025-09-08 01:18:02.911625 | PLAY [Cleanup play]
2025-09-08 01:18:02.928374 | 
2025-09-08 01:18:02.928518 | TASK [Set cloud fact (Zuul deployment)]
2025-09-08 01:18:03.009578 | orchestrator | ok
2025-09-08 01:18:03.019995 | 
2025-09-08 01:18:03.020153 | TASK [Set cloud fact (local deployment)]
2025-09-08 01:18:03.055528 | orchestrator | skipping: Conditional result was False
2025-09-08 01:18:03.073399 | 
2025-09-08 01:18:03.073549 | TASK [Clean the cloud environment]
2025-09-08 01:18:04.215717 | orchestrator | 2025-09-08 01:18:04 - clean up servers
2025-09-08 01:18:04.691790 | orchestrator | 2025-09-08 01:18:04 - clean up keypairs
2025-09-08 01:18:04.706122 | orchestrator | 2025-09-08 01:18:04 - wait for servers to be gone
2025-09-08 01:18:04.743734 | orchestrator | 2025-09-08 01:18:04 - clean up ports
2025-09-08 01:18:04.812274 | orchestrator | 2025-09-08 01:18:04 - clean up volumes
2025-09-08 01:18:04.877217 | orchestrator | 2025-09-08 01:18:04 - disconnect routers
2025-09-08 01:18:04.900109 | orchestrator | 2025-09-08 01:18:04 - clean up subnets
2025-09-08 01:18:04.922203 | orchestrator | 2025-09-08 01:18:04 - clean up networks
2025-09-08 01:18:05.090385 | orchestrator | 2025-09-08 01:18:05 - clean up security groups
2025-09-08 01:18:05.123445 | orchestrator | 2025-09-08 01:18:05 - clean up floating ips
2025-09-08 01:18:05.146674 | orchestrator | 2025-09-08 01:18:05 - clean up routers
2025-09-08 01:18:05.613026 | orchestrator | ok: Runtime: 0:00:01.321439
2025-09-08 01:18:05.617457 | 
2025-09-08 01:18:05.617651 | PLAY RECAP
2025-09-08 01:18:05.617786 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-08 01:18:05.617855 | 
2025-09-08 01:18:05.759832 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-08 01:18:05.762367 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-08 01:18:06.536287 | 
2025-09-08 01:18:06.536457 | PLAY [Base post-fetch]
2025-09-08 01:18:06.552172 | 
2025-09-08 01:18:06.552393 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-08 01:18:06.619093 | orchestrator | skipping: Conditional result was False
2025-09-08 01:18:06.634336 | 
2025-09-08 01:18:06.634542 | TASK [fetch-output : Set log path for single node]
2025-09-08 01:18:06.693999 | orchestrator | ok
2025-09-08 01:18:06.702650 | 
2025-09-08 01:18:06.702783 | LOOP [fetch-output : Ensure local output dirs]
2025-09-08 01:18:07.173397 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/47f168dc1bd94c728bdb6d46c2dda984/work/logs"
2025-09-08 01:18:07.462044 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/47f168dc1bd94c728bdb6d46c2dda984/work/artifacts"
2025-09-08 01:18:07.727100 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/47f168dc1bd94c728bdb6d46c2dda984/work/docs"
2025-09-08 01:18:07.750545 | 
2025-09-08 01:18:07.750734 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-08 01:18:08.679163 | orchestrator | changed: .d..t...... ./
2025-09-08 01:18:08.679470 | orchestrator | changed: All items complete
2025-09-08 01:18:08.679520 | 
2025-09-08 01:18:09.413410 | orchestrator | changed: .d..t...... ./
2025-09-08 01:18:10.145133 | orchestrator | changed: .d..t...... ./
2025-09-08 01:18:10.172588 | 
2025-09-08 01:18:10.172748 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-08 01:18:10.214232 | orchestrator | skipping: Conditional result was False
2025-09-08 01:18:10.216804 | orchestrator | skipping: Conditional result was False
2025-09-08 01:18:10.229132 | 
2025-09-08 01:18:10.229269 | PLAY RECAP
2025-09-08 01:18:10.229327 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-08 01:18:10.229355 | 
2025-09-08 01:18:10.373235 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-08 01:18:10.375508 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-08 01:18:11.132314 | 
2025-09-08 01:18:11.132490 | PLAY [Base post]
2025-09-08 01:18:11.147987 | 
2025-09-08 01:18:11.148134 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-08 01:18:12.086458 | orchestrator | changed
2025-09-08 01:18:12.096487 | 
2025-09-08 01:18:12.096605 | PLAY RECAP
2025-09-08 01:18:12.096682 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-08 01:18:12.096761 | 
2025-09-08 01:18:12.218565 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-08 01:18:12.221483 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-08 01:18:13.018917 | 
2025-09-08 01:18:13.019130 | PLAY [Base post-logs]
2025-09-08 01:18:13.030113 | 
2025-09-08 01:18:13.030247 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-08 01:18:13.517480 | localhost | changed
2025-09-08 01:18:13.535310 | 
2025-09-08 01:18:13.535533 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-08 01:18:13.565408 | localhost | ok
2025-09-08 01:18:13.570883 | 
2025-09-08 01:18:13.571044 | TASK [Set zuul-log-path fact]
2025-09-08 01:18:13.589264 | localhost | ok
2025-09-08 01:18:13.600335 | 
2025-09-08 01:18:13.600457 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-08 01:18:13.638479 | localhost | ok
2025-09-08 01:18:13.644497 | 
2025-09-08 01:18:13.644684 | TASK [upload-logs : Create log directories]
2025-09-08 01:18:14.183935 | localhost | changed
2025-09-08 01:18:14.188300 | 
2025-09-08 01:18:14.188465 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-08 01:18:14.721514 | localhost -> localhost | ok: Runtime: 0:00:00.008094
2025-09-08 01:18:14.730593 | 
2025-09-08 01:18:14.730928 | TASK [upload-logs : Upload logs to log server]
2025-09-08 01:18:15.319733 | localhost | Output suppressed because no_log was given
2025-09-08 01:18:15.323347 | 
2025-09-08 01:18:15.323517 | LOOP [upload-logs : Compress console log and json output]
2025-09-08 01:18:15.381532 | localhost | skipping: Conditional result was False
2025-09-08 01:18:15.386535 | localhost | skipping: Conditional result was False
2025-09-08 01:18:15.399601 | 
2025-09-08 01:18:15.399902 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-08 01:18:15.461881 | localhost | skipping: Conditional result was False
2025-09-08 01:18:15.462486 | 
2025-09-08 01:18:15.465893 | localhost | skipping: Conditional result was False
2025-09-08 01:18:15.474784 | 
2025-09-08 01:18:15.475027 | LOOP [upload-logs : Upload console log and json output]
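Editor's note on the failure above: the deploy step died with `TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'` while handling the Cirros 0.6.3 image, and the locals dump shows `upstream_checksum = ''`, i.e. some piece of image metadata came back empty. The sketch below is a hypothetical, minimal reproduction of that failure class (a size value of `None` reaching a division) plus a guarded variant; the function names are illustrative and it is not the actual image-manager code from this job.

```python
# Hypothetical sketch: a metadata lookup that can return None flows
# straight into arithmetic, reproducing the TypeError seen in the log.

def mib(size_bytes):
    """Naive conversion to MiB; crashes if size_bytes is None."""
    return size_bytes / (1024 * 1024)

def mib_safe(size_bytes):
    """Guarded variant: reject missing metadata with a clear error."""
    if size_bytes is None:
        raise ValueError("image size missing from metadata")
    return size_bytes / (1024 * 1024)

try:
    mib(None)  # simulates the None that reached the division in the job
except TypeError as exc:
    # Same error class as the log line:
    # unsupported operand type(s) for /: 'NoneType' and 'int'
    print(exc)
```

The guarded variant turns a bare `TypeError` deep in a traceback into an actionable message, which is usually the cheap fix for this kind of CI failure.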